Nov 25 02:10:47 np0005534516 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 02:10:47 np0005534516 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 02:10:47 np0005534516 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 02:10:47 np0005534516 kernel: BIOS-provided physical RAM map:
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 02:10:47 np0005534516 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 02:10:47 np0005534516 kernel: NX (Execute Disable) protection: active
Nov 25 02:10:47 np0005534516 kernel: APIC: Static calls initialized
Nov 25 02:10:47 np0005534516 kernel: SMBIOS 2.8 present.
Nov 25 02:10:47 np0005534516 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 02:10:47 np0005534516 kernel: Hypervisor detected: KVM
Nov 25 02:10:47 np0005534516 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 02:10:47 np0005534516 kernel: kvm-clock: using sched offset of 4503485878 cycles
Nov 25 02:10:47 np0005534516 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 02:10:47 np0005534516 kernel: tsc: Detected 2799.998 MHz processor
Nov 25 02:10:47 np0005534516 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 02:10:47 np0005534516 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 02:10:47 np0005534516 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 02:10:47 np0005534516 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 02:10:47 np0005534516 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 02:10:47 np0005534516 kernel: Using GB pages for direct mapping
Nov 25 02:10:47 np0005534516 kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 02:10:47 np0005534516 kernel: ACPI: Early table checksum verification disabled
Nov 25 02:10:47 np0005534516 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 02:10:47 np0005534516 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 02:10:47 np0005534516 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 02:10:47 np0005534516 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 02:10:47 np0005534516 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 02:10:47 np0005534516 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 02:10:47 np0005534516 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 02:10:47 np0005534516 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 02:10:47 np0005534516 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 02:10:47 np0005534516 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 02:10:47 np0005534516 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 02:10:47 np0005534516 kernel: No NUMA configuration found
Nov 25 02:10:47 np0005534516 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 02:10:47 np0005534516 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 25 02:10:47 np0005534516 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 02:10:47 np0005534516 kernel: Zone ranges:
Nov 25 02:10:47 np0005534516 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 02:10:47 np0005534516 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 02:10:47 np0005534516 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 02:10:47 np0005534516 kernel:  Device   empty
Nov 25 02:10:47 np0005534516 kernel: Movable zone start for each node
Nov 25 02:10:47 np0005534516 kernel: Early memory node ranges
Nov 25 02:10:47 np0005534516 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 02:10:47 np0005534516 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 02:10:47 np0005534516 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 02:10:47 np0005534516 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 02:10:47 np0005534516 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 02:10:47 np0005534516 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 02:10:47 np0005534516 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 02:10:47 np0005534516 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 02:10:47 np0005534516 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 02:10:47 np0005534516 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 02:10:47 np0005534516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 02:10:47 np0005534516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 02:10:47 np0005534516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 02:10:47 np0005534516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 02:10:47 np0005534516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 02:10:47 np0005534516 kernel: TSC deadline timer available
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Max. logical packages:   8
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Max. logical dies:       8
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Max. dies per package:   1
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Max. threads per core:   1
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Num. cores per package:     1
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Num. threads per package:   1
Nov 25 02:10:47 np0005534516 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 02:10:47 np0005534516 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 02:10:47 np0005534516 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 02:10:47 np0005534516 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 02:10:47 np0005534516 kernel: Booting paravirtualized kernel on KVM
Nov 25 02:10:47 np0005534516 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 02:10:47 np0005534516 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 02:10:47 np0005534516 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 02:10:47 np0005534516 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 02:10:47 np0005534516 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 02:10:47 np0005534516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 02:10:47 np0005534516 kernel: random: crng init done
Nov 25 02:10:47 np0005534516 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: Fallback order for Node 0: 0 
Nov 25 02:10:47 np0005534516 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 02:10:47 np0005534516 kernel: Policy zone: Normal
Nov 25 02:10:47 np0005534516 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 02:10:47 np0005534516 kernel: software IO TLB: area num 8.
Nov 25 02:10:47 np0005534516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 02:10:47 np0005534516 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 02:10:47 np0005534516 kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 02:10:47 np0005534516 kernel: Dynamic Preempt: voluntary
Nov 25 02:10:47 np0005534516 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 02:10:47 np0005534516 kernel: rcu: 	RCU event tracing is enabled.
Nov 25 02:10:47 np0005534516 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 02:10:47 np0005534516 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 25 02:10:47 np0005534516 kernel: 	Rude variant of Tasks RCU enabled.
Nov 25 02:10:47 np0005534516 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 25 02:10:47 np0005534516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 02:10:47 np0005534516 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 02:10:47 np0005534516 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 02:10:47 np0005534516 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 02:10:47 np0005534516 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 02:10:47 np0005534516 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 02:10:47 np0005534516 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 02:10:47 np0005534516 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 02:10:47 np0005534516 kernel: Console: colour VGA+ 80x25
Nov 25 02:10:47 np0005534516 kernel: printk: console [ttyS0] enabled
Nov 25 02:10:47 np0005534516 kernel: ACPI: Core revision 20230331
Nov 25 02:10:47 np0005534516 kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 02:10:47 np0005534516 kernel: x2apic enabled
Nov 25 02:10:47 np0005534516 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 02:10:47 np0005534516 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 02:10:47 np0005534516 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 02:10:47 np0005534516 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 02:10:47 np0005534516 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 02:10:47 np0005534516 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 02:10:47 np0005534516 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 02:10:47 np0005534516 kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 02:10:47 np0005534516 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 02:10:47 np0005534516 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 02:10:47 np0005534516 kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 02:10:47 np0005534516 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 02:10:47 np0005534516 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 02:10:47 np0005534516 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 02:10:47 np0005534516 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 02:10:47 np0005534516 kernel: x86/bugs: return thunk changed
Nov 25 02:10:47 np0005534516 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 02:10:47 np0005534516 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 02:10:47 np0005534516 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 02:10:47 np0005534516 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 02:10:47 np0005534516 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 02:10:47 np0005534516 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 02:10:47 np0005534516 kernel: Freeing SMP alternatives memory: 40K
Nov 25 02:10:47 np0005534516 kernel: pid_max: default: 32768 minimum: 301
Nov 25 02:10:47 np0005534516 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 02:10:47 np0005534516 kernel: landlock: Up and running.
Nov 25 02:10:47 np0005534516 kernel: Yama: becoming mindful.
Nov 25 02:10:47 np0005534516 kernel: SELinux:  Initializing.
Nov 25 02:10:47 np0005534516 kernel: LSM support for eBPF active
Nov 25 02:10:47 np0005534516 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 02:10:47 np0005534516 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 02:10:47 np0005534516 kernel: ... version:                0
Nov 25 02:10:47 np0005534516 kernel: ... bit width:              48
Nov 25 02:10:47 np0005534516 kernel: ... generic registers:      6
Nov 25 02:10:47 np0005534516 kernel: ... value mask:             0000ffffffffffff
Nov 25 02:10:47 np0005534516 kernel: ... max period:             00007fffffffffff
Nov 25 02:10:47 np0005534516 kernel: ... fixed-purpose events:   0
Nov 25 02:10:47 np0005534516 kernel: ... event mask:             000000000000003f
Nov 25 02:10:47 np0005534516 kernel: signal: max sigframe size: 1776
Nov 25 02:10:47 np0005534516 kernel: rcu: Hierarchical SRCU implementation.
Nov 25 02:10:47 np0005534516 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 25 02:10:47 np0005534516 kernel: smp: Bringing up secondary CPUs ...
Nov 25 02:10:47 np0005534516 kernel: smpboot: x86: Booting SMP configuration:
Nov 25 02:10:47 np0005534516 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 02:10:47 np0005534516 kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 02:10:47 np0005534516 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 02:10:47 np0005534516 kernel: node 0 deferred pages initialised in 7ms
Nov 25 02:10:47 np0005534516 kernel: Memory: 7776600K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605564K reserved, 0K cma-reserved)
Nov 25 02:10:47 np0005534516 kernel: devtmpfs: initialized
Nov 25 02:10:47 np0005534516 kernel: x86/mm: Memory block size: 128MB
Nov 25 02:10:47 np0005534516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 02:10:47 np0005534516 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 02:10:47 np0005534516 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 02:10:47 np0005534516 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 02:10:47 np0005534516 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 02:10:47 np0005534516 kernel: audit: initializing netlink subsys (disabled)
Nov 25 02:10:47 np0005534516 kernel: audit: type=2000 audit(1764054645.272:1): state=initialized audit_enabled=0 res=1
Nov 25 02:10:47 np0005534516 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 02:10:47 np0005534516 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 02:10:47 np0005534516 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 02:10:47 np0005534516 kernel: cpuidle: using governor menu
Nov 25 02:10:47 np0005534516 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 02:10:47 np0005534516 kernel: PCI: Using configuration type 1 for base access
Nov 25 02:10:47 np0005534516 kernel: PCI: Using configuration type 1 for extended access
Nov 25 02:10:47 np0005534516 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 02:10:47 np0005534516 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 02:10:47 np0005534516 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 02:10:47 np0005534516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 02:10:47 np0005534516 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 02:10:47 np0005534516 kernel: Demotion targets for Node 0: null
Nov 25 02:10:47 np0005534516 kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 02:10:47 np0005534516 kernel: ACPI: Added _OSI(Module Device)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Added _OSI(Processor Device)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 02:10:47 np0005534516 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 02:10:47 np0005534516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 02:10:47 np0005534516 kernel: ACPI: Interpreter enabled
Nov 25 02:10:47 np0005534516 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 02:10:47 np0005534516 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 02:10:47 np0005534516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 02:10:47 np0005534516 kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 02:10:47 np0005534516 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 02:10:47 np0005534516 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [3] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [4] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [5] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [6] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [7] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [8] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [9] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [10] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [11] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [12] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [13] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [14] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [15] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [16] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [17] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [18] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [19] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [20] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [21] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [22] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [23] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [24] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [25] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [26] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [27] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [28] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [29] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [30] registered
Nov 25 02:10:47 np0005534516 kernel: acpiphp: Slot [31] registered
Nov 25 02:10:47 np0005534516 kernel: PCI host bridge to bus 0000:00
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 02:10:47 np0005534516 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 02:10:47 np0005534516 kernel: iommu: Default domain type: Translated
Nov 25 02:10:47 np0005534516 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 02:10:47 np0005534516 kernel: SCSI subsystem initialized
Nov 25 02:10:47 np0005534516 kernel: ACPI: bus type USB registered
Nov 25 02:10:47 np0005534516 kernel: usbcore: registered new interface driver usbfs
Nov 25 02:10:47 np0005534516 kernel: usbcore: registered new interface driver hub
Nov 25 02:10:47 np0005534516 kernel: usbcore: registered new device driver usb
Nov 25 02:10:47 np0005534516 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 02:10:47 np0005534516 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 02:10:47 np0005534516 kernel: PTP clock support registered
Nov 25 02:10:47 np0005534516 kernel: EDAC MC: Ver: 3.0.0
Nov 25 02:10:47 np0005534516 kernel: NetLabel: Initializing
Nov 25 02:10:47 np0005534516 kernel: NetLabel:  domain hash size = 128
Nov 25 02:10:47 np0005534516 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 02:10:47 np0005534516 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 02:10:47 np0005534516 kernel: PCI: Using ACPI for IRQ routing
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 02:10:47 np0005534516 kernel: vgaarb: loaded
Nov 25 02:10:47 np0005534516 kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 02:10:47 np0005534516 kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 02:10:47 np0005534516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 02:10:47 np0005534516 kernel: pnp: PnP ACPI init
Nov 25 02:10:47 np0005534516 kernel: pnp: PnP ACPI: found 5 devices
Nov 25 02:10:47 np0005534516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_INET protocol family
Nov 25 02:10:47 np0005534516 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 02:10:47 np0005534516 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_XDP protocol family
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 02:10:47 np0005534516 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 02:10:47 np0005534516 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 02:10:47 np0005534516 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74261 usecs
Nov 25 02:10:47 np0005534516 kernel: PCI: CLS 0 bytes, default 64
Nov 25 02:10:47 np0005534516 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 02:10:47 np0005534516 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 02:10:47 np0005534516 kernel: Trying to unpack rootfs image as initramfs...
Nov 25 02:10:47 np0005534516 kernel: ACPI: bus type thunderbolt registered
Nov 25 02:10:47 np0005534516 kernel: Initialise system trusted keyrings
Nov 25 02:10:47 np0005534516 kernel: Key type blacklist registered
Nov 25 02:10:47 np0005534516 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 02:10:47 np0005534516 kernel: zbud: loaded
Nov 25 02:10:47 np0005534516 kernel: integrity: Platform Keyring initialized
Nov 25 02:10:47 np0005534516 kernel: integrity: Machine keyring initialized
Nov 25 02:10:47 np0005534516 kernel: Freeing initrd memory: 75160K
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_ALG protocol family
Nov 25 02:10:47 np0005534516 kernel: xor: automatically using best checksumming function   avx       
Nov 25 02:10:47 np0005534516 kernel: Key type asymmetric registered
Nov 25 02:10:47 np0005534516 kernel: Asymmetric key parser 'x509' registered
Nov 25 02:10:47 np0005534516 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 02:10:47 np0005534516 kernel: io scheduler mq-deadline registered
Nov 25 02:10:47 np0005534516 kernel: io scheduler kyber registered
Nov 25 02:10:47 np0005534516 kernel: io scheduler bfq registered
Nov 25 02:10:47 np0005534516 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 02:10:47 np0005534516 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 02:10:47 np0005534516 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 02:10:47 np0005534516 kernel: ACPI: button: Power Button [PWRF]
Nov 25 02:10:47 np0005534516 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 02:10:47 np0005534516 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 02:10:47 np0005534516 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 02:10:47 np0005534516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 02:10:47 np0005534516 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 02:10:47 np0005534516 kernel: Non-volatile memory driver v1.3
Nov 25 02:10:47 np0005534516 kernel: rdac: device handler registered
Nov 25 02:10:47 np0005534516 kernel: hp_sw: device handler registered
Nov 25 02:10:47 np0005534516 kernel: emc: device handler registered
Nov 25 02:10:47 np0005534516 kernel: alua: device handler registered
Nov 25 02:10:47 np0005534516 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 02:10:47 np0005534516 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 02:10:47 np0005534516 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 02:10:47 np0005534516 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 02:10:47 np0005534516 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 02:10:47 np0005534516 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 02:10:47 np0005534516 kernel: usb usb1: Product: UHCI Host Controller
Nov 25 02:10:47 np0005534516 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 02:10:47 np0005534516 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 02:10:47 np0005534516 kernel: hub 1-0:1.0: USB hub found
Nov 25 02:10:47 np0005534516 kernel: hub 1-0:1.0: 2 ports detected
Nov 25 02:10:47 np0005534516 kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 02:10:47 np0005534516 kernel: usbserial: USB Serial support registered for generic
Nov 25 02:10:47 np0005534516 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 02:10:47 np0005534516 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 02:10:47 np0005534516 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 02:10:47 np0005534516 kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 02:10:47 np0005534516 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 02:10:47 np0005534516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 02:10:47 np0005534516 kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 02:10:47 np0005534516 kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T07:10:46 UTC (1764054646)
Nov 25 02:10:47 np0005534516 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 02:10:47 np0005534516 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 02:10:47 np0005534516 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 02:10:47 np0005534516 kernel: usbcore: registered new interface driver usbhid
Nov 25 02:10:47 np0005534516 kernel: usbhid: USB HID core driver
Nov 25 02:10:47 np0005534516 kernel: drop_monitor: Initializing network drop monitor service
Nov 25 02:10:47 np0005534516 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 02:10:47 np0005534516 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 02:10:47 np0005534516 kernel: Initializing XFRM netlink socket
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_INET6 protocol family
Nov 25 02:10:47 np0005534516 kernel: Segment Routing with IPv6
Nov 25 02:10:47 np0005534516 kernel: NET: Registered PF_PACKET protocol family
Nov 25 02:10:47 np0005534516 kernel: mpls_gso: MPLS GSO support
Nov 25 02:10:47 np0005534516 kernel: IPI shorthand broadcast: enabled
Nov 25 02:10:47 np0005534516 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 02:10:47 np0005534516 kernel: AES CTR mode by8 optimization enabled
Nov 25 02:10:47 np0005534516 kernel: sched_clock: Marking stable (1191002827, 150153723)->(1425753309, -84596759)
Nov 25 02:10:47 np0005534516 kernel: registered taskstats version 1
Nov 25 02:10:47 np0005534516 kernel: Loading compiled-in X.509 certificates
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 02:10:47 np0005534516 kernel: Demotion targets for Node 0: null
Nov 25 02:10:47 np0005534516 kernel: page_owner is disabled
Nov 25 02:10:47 np0005534516 kernel: Key type .fscrypt registered
Nov 25 02:10:47 np0005534516 kernel: Key type fscrypt-provisioning registered
Nov 25 02:10:47 np0005534516 kernel: Key type big_key registered
Nov 25 02:10:47 np0005534516 kernel: Key type encrypted registered
Nov 25 02:10:47 np0005534516 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 02:10:47 np0005534516 kernel: Loading compiled-in module X.509 certificates
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 02:10:47 np0005534516 kernel: ima: Allocated hash algorithm: sha256
Nov 25 02:10:47 np0005534516 kernel: ima: No architecture policies found
Nov 25 02:10:47 np0005534516 kernel: evm: Initialising EVM extended attributes:
Nov 25 02:10:47 np0005534516 kernel: evm: security.selinux
Nov 25 02:10:47 np0005534516 kernel: evm: security.SMACK64 (disabled)
Nov 25 02:10:47 np0005534516 kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 02:10:47 np0005534516 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 02:10:47 np0005534516 kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 02:10:47 np0005534516 kernel: evm: security.apparmor (disabled)
Nov 25 02:10:47 np0005534516 kernel: evm: security.ima
Nov 25 02:10:47 np0005534516 kernel: evm: security.capability
Nov 25 02:10:47 np0005534516 kernel: evm: HMAC attrs: 0x1
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 02:10:47 np0005534516 kernel: Running certificate verification RSA selftest
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 02:10:47 np0005534516 kernel: Running certificate verification ECDSA selftest
Nov 25 02:10:47 np0005534516 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 02:10:47 np0005534516 kernel: clk: Disabling unused clocks
Nov 25 02:10:47 np0005534516 kernel: Freeing unused decrypted memory: 2028K
Nov 25 02:10:47 np0005534516 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 02:10:47 np0005534516 kernel: Write protecting the kernel read-only data: 30720k
Nov 25 02:10:47 np0005534516 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: Manufacturer: QEMU
Nov 25 02:10:47 np0005534516 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 02:10:47 np0005534516 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 02:10:47 np0005534516 kernel: Run /init as init process
Nov 25 02:10:47 np0005534516 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 02:10:47 np0005534516 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 02:10:47 np0005534516 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 02:10:47 np0005534516 systemd: Detected virtualization kvm.
Nov 25 02:10:47 np0005534516 systemd: Detected architecture x86-64.
Nov 25 02:10:47 np0005534516 systemd: Running in initrd.
Nov 25 02:10:47 np0005534516 systemd: No hostname configured, using default hostname.
Nov 25 02:10:47 np0005534516 systemd: Hostname set to <localhost>.
Nov 25 02:10:47 np0005534516 systemd: Initializing machine ID from VM UUID.
Nov 25 02:10:47 np0005534516 systemd: Queued start job for default target Initrd Default Target.
Nov 25 02:10:47 np0005534516 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 02:10:47 np0005534516 systemd: Reached target Local Encrypted Volumes.
Nov 25 02:10:47 np0005534516 systemd: Reached target Initrd /usr File System.
Nov 25 02:10:47 np0005534516 systemd: Reached target Local File Systems.
Nov 25 02:10:47 np0005534516 systemd: Reached target Path Units.
Nov 25 02:10:47 np0005534516 systemd: Reached target Slice Units.
Nov 25 02:10:47 np0005534516 systemd: Reached target Swaps.
Nov 25 02:10:47 np0005534516 systemd: Reached target Timer Units.
Nov 25 02:10:47 np0005534516 systemd: Listening on D-Bus System Message Bus Socket.
Nov 25 02:10:47 np0005534516 systemd: Listening on Journal Socket (/dev/log).
Nov 25 02:10:47 np0005534516 systemd: Listening on Journal Socket.
Nov 25 02:10:47 np0005534516 systemd: Listening on udev Control Socket.
Nov 25 02:10:47 np0005534516 systemd: Listening on udev Kernel Socket.
Nov 25 02:10:47 np0005534516 systemd: Reached target Socket Units.
Nov 25 02:10:47 np0005534516 systemd: Starting Create List of Static Device Nodes...
Nov 25 02:10:47 np0005534516 systemd: Starting Journal Service...
Nov 25 02:10:47 np0005534516 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 02:10:47 np0005534516 systemd: Starting Apply Kernel Variables...
Nov 25 02:10:47 np0005534516 systemd: Starting Create System Users...
Nov 25 02:10:47 np0005534516 systemd: Starting Setup Virtual Console...
Nov 25 02:10:47 np0005534516 systemd: Finished Create List of Static Device Nodes.
Nov 25 02:10:47 np0005534516 systemd: Finished Apply Kernel Variables.
Nov 25 02:10:47 np0005534516 systemd: Finished Create System Users.
Nov 25 02:10:47 np0005534516 systemd-journald[304]: Journal started
Nov 25 02:10:47 np0005534516 systemd-journald[304]: Runtime Journal (/run/log/journal/c15e9c692e4d4822bb2513f5e1e4f89a) is 8.0M, max 153.6M, 145.6M free.
Nov 25 02:10:47 np0005534516 systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 25 02:10:47 np0005534516 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 25 02:10:47 np0005534516 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 02:10:47 np0005534516 systemd: Started Journal Service.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 02:10:47 np0005534516 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 02:10:47 np0005534516 systemd[1]: Finished Setup Virtual Console.
Nov 25 02:10:47 np0005534516 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting dracut cmdline hook...
Nov 25 02:10:47 np0005534516 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 02:10:47 np0005534516 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 02:10:47 np0005534516 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 02:10:47 np0005534516 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 02:10:47 np0005534516 systemd[1]: Finished dracut cmdline hook.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting dracut pre-udev hook...
Nov 25 02:10:47 np0005534516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 02:10:47 np0005534516 kernel: device-mapper: uevent: version 1.0.3
Nov 25 02:10:47 np0005534516 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 02:10:47 np0005534516 kernel: RPC: Registered named UNIX socket transport module.
Nov 25 02:10:47 np0005534516 kernel: RPC: Registered udp transport module.
Nov 25 02:10:47 np0005534516 kernel: RPC: Registered tcp transport module.
Nov 25 02:10:47 np0005534516 kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 02:10:47 np0005534516 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 02:10:47 np0005534516 rpc.statd[442]: Version 2.5.4 starting
Nov 25 02:10:47 np0005534516 rpc.statd[442]: Initializing NSM state
Nov 25 02:10:47 np0005534516 rpc.idmapd[447]: Setting log level to 0
Nov 25 02:10:47 np0005534516 systemd[1]: Finished dracut pre-udev hook.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 02:10:47 np0005534516 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 02:10:47 np0005534516 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting dracut pre-trigger hook...
Nov 25 02:10:47 np0005534516 systemd[1]: Finished dracut pre-trigger hook.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting Coldplug All udev Devices...
Nov 25 02:10:47 np0005534516 systemd[1]: Created slice Slice /system/modprobe.
Nov 25 02:10:47 np0005534516 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 02:10:47 np0005534516 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 02:10:47 np0005534516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 02:10:47 np0005534516 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 02:10:47 np0005534516 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 02:10:47 np0005534516 systemd[1]: Reached target Network.
Nov 25 02:10:47 np0005534516 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 02:10:47 np0005534516 systemd[1]: Starting dracut initqueue hook...
Nov 25 02:10:48 np0005534516 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 02:10:48 np0005534516 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 02:10:48 np0005534516 kernel: scsi host0: ata_piix
Nov 25 02:10:48 np0005534516 kernel: scsi host1: ata_piix
Nov 25 02:10:48 np0005534516 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 02:10:48 np0005534516 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 02:10:48 np0005534516 kernel: vda: vda1
Nov 25 02:10:48 np0005534516 systemd[1]: Mounting Kernel Configuration File System...
Nov 25 02:10:48 np0005534516 systemd[1]: Mounted Kernel Configuration File System.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target System Initialization.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target Basic System.
Nov 25 02:10:48 np0005534516 kernel: ata1: found unknown device (class 0)
Nov 25 02:10:48 np0005534516 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 02:10:48 np0005534516 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 02:10:48 np0005534516 systemd-udevd[507]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:10:48 np0005534516 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 02:10:48 np0005534516 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 02:10:48 np0005534516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 02:10:48 np0005534516 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target Initrd Root Device.
Nov 25 02:10:48 np0005534516 systemd[1]: Finished dracut initqueue hook.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 02:10:48 np0005534516 systemd[1]: Reached target Remote File Systems.
Nov 25 02:10:48 np0005534516 systemd[1]: Starting dracut pre-mount hook...
Nov 25 02:10:48 np0005534516 systemd[1]: Finished dracut pre-mount hook.
Nov 25 02:10:48 np0005534516 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 02:10:48 np0005534516 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 02:10:48 np0005534516 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 02:10:48 np0005534516 systemd[1]: Mounting /sysroot...
Nov 25 02:10:49 np0005534516 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 02:10:49 np0005534516 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 02:10:49 np0005534516 kernel: XFS (vda1): Ending clean mount
Nov 25 02:10:49 np0005534516 systemd[1]: Mounted /sysroot.
Nov 25 02:10:49 np0005534516 systemd[1]: Reached target Initrd Root File System.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 02:10:49 np0005534516 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 02:10:49 np0005534516 systemd[1]: Reached target Initrd File Systems.
Nov 25 02:10:49 np0005534516 systemd[1]: Reached target Initrd Default Target.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting dracut mount hook...
Nov 25 02:10:49 np0005534516 systemd[1]: Finished dracut mount hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 02:10:49 np0005534516 rpc.idmapd[447]: exiting on signal 15
Nov 25 02:10:49 np0005534516 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Network.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Timer Units.
Nov 25 02:10:49 np0005534516 systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Initrd Default Target.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Basic System.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Initrd Root Device.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Initrd /usr File System.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Path Units.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Remote File Systems.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Slice Units.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Socket Units.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target System Initialization.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Local File Systems.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Swaps.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut mount hook.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut pre-mount hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut initqueue hook.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Setup Virtual Console.
Nov 25 02:10:49 np0005534516 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Closed udev Control Socket.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Closed udev Kernel Socket.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut pre-udev hook.
Nov 25 02:10:49 np0005534516 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped dracut cmdline hook.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting Cleanup udev Database...
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 02:10:49 np0005534516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 02:10:49 np0005534516 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Stopped Create System Users.
Nov 25 02:10:49 np0005534516 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 02:10:49 np0005534516 systemd[1]: Finished Cleanup udev Database.
Nov 25 02:10:49 np0005534516 systemd[1]: Reached target Switch Root.
Nov 25 02:10:49 np0005534516 systemd[1]: Starting Switch Root...
Nov 25 02:10:49 np0005534516 systemd[1]: Switching root.
Nov 25 02:10:49 np0005534516 systemd-journald[304]: Journal stopped
Nov 25 02:10:51 np0005534516 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 25 02:10:51 np0005534516 kernel: audit: type=1404 audit(1764054649.983:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:10:51 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:10:51 np0005534516 kernel: audit: type=1403 audit(1764054650.219:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 02:10:51 np0005534516 systemd: Successfully loaded SELinux policy in 240.642ms.
Nov 25 02:10:51 np0005534516 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.380ms.
Nov 25 02:10:51 np0005534516 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 02:10:51 np0005534516 systemd: Detected virtualization kvm.
Nov 25 02:10:51 np0005534516 systemd: Detected architecture x86-64.
Nov 25 02:10:51 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:10:51 np0005534516 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd: Stopped Switch Root.
Nov 25 02:10:51 np0005534516 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 02:10:51 np0005534516 systemd: Created slice Slice /system/getty.
Nov 25 02:10:51 np0005534516 systemd: Created slice Slice /system/serial-getty.
Nov 25 02:10:51 np0005534516 systemd: Created slice Slice /system/sshd-keygen.
Nov 25 02:10:51 np0005534516 systemd: Created slice User and Session Slice.
Nov 25 02:10:51 np0005534516 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 02:10:51 np0005534516 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 25 02:10:51 np0005534516 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 02:10:51 np0005534516 systemd: Reached target Local Encrypted Volumes.
Nov 25 02:10:51 np0005534516 systemd: Stopped target Switch Root.
Nov 25 02:10:51 np0005534516 systemd: Stopped target Initrd File Systems.
Nov 25 02:10:51 np0005534516 systemd: Stopped target Initrd Root File System.
Nov 25 02:10:51 np0005534516 systemd: Reached target Local Integrity Protected Volumes.
Nov 25 02:10:51 np0005534516 systemd: Reached target Path Units.
Nov 25 02:10:51 np0005534516 systemd: Reached target rpc_pipefs.target.
Nov 25 02:10:51 np0005534516 systemd: Reached target Slice Units.
Nov 25 02:10:51 np0005534516 systemd: Reached target Swaps.
Nov 25 02:10:51 np0005534516 systemd: Reached target Local Verity Protected Volumes.
Nov 25 02:10:51 np0005534516 systemd: Listening on RPCbind Server Activation Socket.
Nov 25 02:10:51 np0005534516 systemd: Reached target RPC Port Mapper.
Nov 25 02:10:51 np0005534516 systemd: Listening on Process Core Dump Socket.
Nov 25 02:10:51 np0005534516 systemd: Listening on initctl Compatibility Named Pipe.
Nov 25 02:10:51 np0005534516 systemd: Listening on udev Control Socket.
Nov 25 02:10:51 np0005534516 systemd: Listening on udev Kernel Socket.
Nov 25 02:10:51 np0005534516 systemd: Mounting Huge Pages File System...
Nov 25 02:10:51 np0005534516 systemd: Mounting POSIX Message Queue File System...
Nov 25 02:10:51 np0005534516 systemd: Mounting Kernel Debug File System...
Nov 25 02:10:51 np0005534516 systemd: Mounting Kernel Trace File System...
Nov 25 02:10:51 np0005534516 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 02:10:51 np0005534516 systemd: Starting Create List of Static Device Nodes...
Nov 25 02:10:51 np0005534516 systemd: Starting Load Kernel Module configfs...
Nov 25 02:10:51 np0005534516 systemd: Starting Load Kernel Module drm...
Nov 25 02:10:51 np0005534516 systemd: Starting Load Kernel Module efi_pstore...
Nov 25 02:10:51 np0005534516 systemd: Starting Load Kernel Module fuse...
Nov 25 02:10:51 np0005534516 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 02:10:51 np0005534516 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd: Stopped File System Check on Root Device.
Nov 25 02:10:51 np0005534516 systemd: Stopped Journal Service.
Nov 25 02:10:51 np0005534516 systemd: Starting Journal Service...
Nov 25 02:10:51 np0005534516 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 02:10:51 np0005534516 systemd: Starting Generate network units from Kernel command line...
Nov 25 02:10:51 np0005534516 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 02:10:51 np0005534516 systemd: Starting Remount Root and Kernel File Systems...
Nov 25 02:10:51 np0005534516 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 02:10:51 np0005534516 systemd: Starting Apply Kernel Variables...
Nov 25 02:10:51 np0005534516 systemd: Starting Coldplug All udev Devices...
Nov 25 02:10:51 np0005534516 systemd-journald[679]: Journal started
Nov 25 02:10:51 np0005534516 systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 02:10:51 np0005534516 systemd[1]: Queued start job for default target Multi-User System.
Nov 25 02:10:51 np0005534516 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd: Mounted Huge Pages File System.
Nov 25 02:10:51 np0005534516 systemd: Started Journal Service.
Nov 25 02:10:51 np0005534516 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 02:10:51 np0005534516 systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 02:10:51 np0005534516 systemd[1]: Mounted Kernel Debug File System.
Nov 25 02:10:51 np0005534516 systemd[1]: Mounted Kernel Trace File System.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 02:10:51 np0005534516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 02:10:51 np0005534516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 02:10:51 np0005534516 kernel: fuse: init (API version 7.37)
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 02:10:51 np0005534516 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Load Kernel Module fuse.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Apply Kernel Variables.
Nov 25 02:10:51 np0005534516 kernel: ACPI: bus type drm_connector registered
Nov 25 02:10:51 np0005534516 systemd[1]: Mounting FUSE Control File System...
Nov 25 02:10:51 np0005534516 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Rebuild Hardware Database...
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 02:10:51 np0005534516 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Create System Users...
Nov 25 02:10:51 np0005534516 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Load Kernel Module drm.
Nov 25 02:10:51 np0005534516 systemd[1]: Mounted FUSE Control File System.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 02:10:51 np0005534516 systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 02:10:51 np0005534516 systemd-journald[679]: Received client request to flush runtime journal.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Create System Users.
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 02:10:51 np0005534516 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 02:10:51 np0005534516 systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 02:10:51 np0005534516 systemd[1]: Reached target Local File Systems.
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 02:10:51 np0005534516 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 02:10:51 np0005534516 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 02:10:51 np0005534516 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 02:10:51 np0005534516 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 02:10:51 np0005534516 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Security Auditing Service...
Nov 25 02:10:51 np0005534516 systemd[1]: Starting RPC Bind...
Nov 25 02:10:51 np0005534516 systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 02:10:51 np0005534516 systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 02:10:51 np0005534516 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 02:10:51 np0005534516 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 02:10:51 np0005534516 systemd[1]: Started RPC Bind.
Nov 25 02:10:52 np0005534516 augenrules[709]: /sbin/augenrules: No change
Nov 25 02:10:52 np0005534516 augenrules[724]: No rules
Nov 25 02:10:52 np0005534516 augenrules[724]: enabled 1
Nov 25 02:10:52 np0005534516 augenrules[724]: failure 1
Nov 25 02:10:52 np0005534516 augenrules[724]: pid 703
Nov 25 02:10:52 np0005534516 augenrules[724]: rate_limit 0
Nov 25 02:10:52 np0005534516 augenrules[724]: backlog_limit 8192
Nov 25 02:10:52 np0005534516 augenrules[724]: lost 0
Nov 25 02:10:52 np0005534516 augenrules[724]: backlog 4
Nov 25 02:10:52 np0005534516 augenrules[724]: backlog_wait_time 60000
Nov 25 02:10:52 np0005534516 augenrules[724]: backlog_wait_time_actual 0
Nov 25 02:10:52 np0005534516 systemd[1]: Started Security Auditing Service.
Nov 25 02:10:52 np0005534516 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 02:10:52 np0005534516 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 02:10:52 np0005534516 systemd[1]: Finished Rebuild Hardware Database.
Nov 25 02:10:52 np0005534516 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 02:10:52 np0005534516 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 02:10:52 np0005534516 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 02:10:52 np0005534516 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 02:10:52 np0005534516 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 02:10:52 np0005534516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 02:10:52 np0005534516 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 02:10:52 np0005534516 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 02:10:52 np0005534516 systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:10:52 np0005534516 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 02:10:52 np0005534516 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 02:10:52 np0005534516 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 02:10:52 np0005534516 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 02:10:52 np0005534516 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 02:10:52 np0005534516 kernel: Console: switching to colour dummy device 80x25
Nov 25 02:10:52 np0005534516 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 02:10:52 np0005534516 kernel: [drm] features: -context_init
Nov 25 02:10:52 np0005534516 kernel: [drm] number of scanouts: 1
Nov 25 02:10:52 np0005534516 kernel: [drm] number of cap sets: 0
Nov 25 02:10:52 np0005534516 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 02:10:52 np0005534516 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 02:10:52 np0005534516 kernel: Console: switching to colour frame buffer device 128x48
Nov 25 02:10:52 np0005534516 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 02:10:52 np0005534516 kernel: kvm_amd: TSC scaling supported
Nov 25 02:10:52 np0005534516 kernel: kvm_amd: Nested Virtualization enabled
Nov 25 02:10:52 np0005534516 kernel: kvm_amd: Nested Paging enabled
Nov 25 02:10:52 np0005534516 kernel: kvm_amd: LBR virtualization supported
Nov 25 02:10:52 np0005534516 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 02:10:52 np0005534516 systemd[1]: Starting Update is Completed...
Nov 25 02:10:52 np0005534516 systemd[1]: Finished Update is Completed.
Nov 25 02:10:52 np0005534516 systemd[1]: Reached target System Initialization.
Nov 25 02:10:52 np0005534516 systemd[1]: Started dnf makecache --timer.
Nov 25 02:10:52 np0005534516 systemd[1]: Started Daily rotation of log files.
Nov 25 02:10:52 np0005534516 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 02:10:52 np0005534516 systemd[1]: Reached target Timer Units.
Nov 25 02:10:52 np0005534516 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 02:10:52 np0005534516 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 02:10:52 np0005534516 systemd[1]: Reached target Socket Units.
Nov 25 02:10:52 np0005534516 systemd[1]: Starting D-Bus System Message Bus...
Nov 25 02:10:52 np0005534516 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 02:10:52 np0005534516 systemd[1]: Started D-Bus System Message Bus.
Nov 25 02:10:52 np0005534516 systemd[1]: Reached target Basic System.
Nov 25 02:10:52 np0005534516 dbus-broker-lau[812]: Ready
Nov 25 02:10:53 np0005534516 systemd[1]: Starting NTP client/server...
Nov 25 02:10:53 np0005534516 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 02:10:53 np0005534516 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 02:10:53 np0005534516 systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 02:10:53 np0005534516 systemd[1]: Started irqbalance daemon.
Nov 25 02:10:53 np0005534516 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 02:10:53 np0005534516 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 02:10:53 np0005534516 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 02:10:53 np0005534516 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 02:10:53 np0005534516 systemd[1]: Reached target sshd-keygen.target.
Nov 25 02:10:53 np0005534516 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 02:10:53 np0005534516 systemd[1]: Reached target User and Group Name Lookups.
Nov 25 02:10:53 np0005534516 systemd[1]: Starting User Login Management...
Nov 25 02:10:53 np0005534516 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 02:10:53 np0005534516 systemd-logind[822]: New seat seat0.
Nov 25 02:10:53 np0005534516 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 02:10:53 np0005534516 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 02:10:53 np0005534516 systemd[1]: Started User Login Management.
Nov 25 02:10:53 np0005534516 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 02:10:53 np0005534516 chronyd[831]: Loaded 0 symmetric keys
Nov 25 02:10:53 np0005534516 chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 25 02:10:53 np0005534516 chronyd[831]: Loaded seccomp filter (level 2)
Nov 25 02:10:53 np0005534516 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 02:10:53 np0005534516 systemd[1]: Started NTP client/server.
Nov 25 02:10:53 np0005534516 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 02:10:53 np0005534516 iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 25 02:10:53 np0005534516 systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 02:10:54 np0005534516 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 07:10:54 +0000. Up 9.44 seconds.
Nov 25 02:10:55 np0005534516 systemd[1]: run-cloud\x2dinit-tmp-tmp0wds0v45.mount: Deactivated successfully.
Nov 25 02:10:55 np0005534516 systemd[1]: Starting Hostname Service...
Nov 25 02:10:55 np0005534516 systemd[1]: Started Hostname Service.
Nov 25 02:10:55 np0005534516 systemd-hostnamed[854]: Hostname set to <np0005534516.novalocal> (static)
Nov 25 02:10:55 np0005534516 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 02:10:55 np0005534516 systemd[1]: Reached target Preparation for Network.
Nov 25 02:10:55 np0005534516 systemd[1]: Starting Network Manager...
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6660] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4972c5ac-d4d6-4122-899d-4e8787c329b1)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6666] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6890] manager[0x557aa74e2080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6957] hostname: hostname: using hostnamed
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6957] hostname: static hostname changed from (none) to "np0005534516.novalocal"
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.6964] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7192] manager[0x557aa74e2080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7193] manager[0x557aa74e2080]: rfkill: WWAN hardware radio set enabled
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7300] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7302] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7304] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7306] manager: Networking is enabled by state file
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7308] settings: Loaded settings plugin: keyfile (internal)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7341] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7369] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7398] dhcp: init: Using DHCP client 'internal'
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7401] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7418] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7432] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7442] device (lo): Activation: starting connection 'lo' (9e1b533f-1bf0-4204-9b0f-32d0aa4be169)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7453] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7458] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7490] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7496] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7500] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7503] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7506] device (eth0): carrier: link connected
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7511] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7519] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7527] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7532] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7535] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7538] manager: NetworkManager state is now CONNECTING
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7541] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7550] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7554] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:10:55 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7596] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7607] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 02:10:55 np0005534516 systemd[1]: Started Network Manager.
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7636] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 systemd[1]: Reached target Network.
Nov 25 02:10:55 np0005534516 systemd[1]: Starting Network Manager Wait Online...
Nov 25 02:10:55 np0005534516 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 02:10:55 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7855] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7860] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7869] device (lo): Activation: successful, device activated.
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7879] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7883] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7889] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7895] device (eth0): Activation: successful, device activated.
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7904] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 02:10:55 np0005534516 NetworkManager[858]: <info>  [1764054655.7910] manager: startup complete
Nov 25 02:10:55 np0005534516 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 02:10:55 np0005534516 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 02:10:55 np0005534516 systemd[1]: Reached target NFS client services.
Nov 25 02:10:55 np0005534516 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 02:10:55 np0005534516 systemd[1]: Reached target Remote File Systems.
Nov 25 02:10:55 np0005534516 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 02:10:55 np0005534516 systemd[1]: Finished Network Manager Wait Online.
Nov 25 02:10:55 np0005534516 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 02:10:55 np0005534516 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 02:10:56 np0005534516 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 07:10:56 +0000. Up 10.77 seconds.
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.169         | 255.255.255.0 | global | fa:16:3e:e0:bf:b3 |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fee0:bfb3/64 |       .       |  link  | fa:16:3e:e0:bf:b3 |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 02:10:56 np0005534516 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 02:10:58 np0005534516 cloud-init[924]: Generating public/private rsa key pair.
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key fingerprint is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: SHA256:1BuQscS8Jo8tlBe6U0bfrwfW+RILiGVLevNZFFGnoK4 root@np0005534516.novalocal
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key's randomart image is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: +---[RSA 3072]----+
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       o+o  . .oo|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       .*+ . ....|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       +o++.  .. |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |      =.*.=o. .  |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |     . @S*oo + . |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |      = =.= + *  |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       oE. + * + |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |            + + .|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |             . . |
Nov 25 02:10:58 np0005534516 cloud-init[924]: +----[SHA256]-----+
Nov 25 02:10:58 np0005534516 cloud-init[924]: Generating public/private ecdsa key pair.
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key fingerprint is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: SHA256:m7Iq1HRhFtgR3E4SIPoywMiU8O0/ROtpHDZtfr9Y9vg root@np0005534516.novalocal
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key's randomart image is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: +---[ECDSA 256]---+
Nov 25 02:10:58 np0005534516 cloud-init[924]: |o.o .==*         |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |=+ o. B o        |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |+o. .o.=         |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |.. ....o.        |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |o .o..* S        |
Nov 25 02:10:58 np0005534516 cloud-init[924]: | o. .* * o       |
Nov 25 02:10:58 np0005534516 cloud-init[924]: | .    O + . o    |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |  .  . + . = o   |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |   ....   . +oE  |
Nov 25 02:10:58 np0005534516 cloud-init[924]: +----[SHA256]-----+
Nov 25 02:10:58 np0005534516 cloud-init[924]: Generating public/private ed25519 key pair.
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 02:10:58 np0005534516 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key fingerprint is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: SHA256:lNZ6f0cNvpMQxbKHvh8iqlnRRXtEQwtWvc3FFk46Kp0 root@np0005534516.novalocal
Nov 25 02:10:58 np0005534516 cloud-init[924]: The key's randomart image is:
Nov 25 02:10:58 np0005534516 cloud-init[924]: +--[ED25519 256]--+
Nov 25 02:10:58 np0005534516 cloud-init[924]: |            +=B=.|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |         o o.==o=|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |        + . +*+++|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       o o..=+oo+|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |        S.oE... o|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |         o.... + |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |        . . o.* .|
Nov 25 02:10:58 np0005534516 cloud-init[924]: |       o . ..o + |
Nov 25 02:10:58 np0005534516 cloud-init[924]: |      o..    ..  |
Nov 25 02:10:58 np0005534516 cloud-init[924]: +----[SHA256]-----+
Nov 25 02:10:58 np0005534516 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 02:10:58 np0005534516 systemd[1]: Reached target Cloud-config availability.
Nov 25 02:10:58 np0005534516 systemd[1]: Reached target Network is Online.
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 02:10:59 np0005534516 systemd[1]: Starting System Logging Service...
Nov 25 02:10:59 np0005534516 sm-notify[1006]: Version 2.5.4 starting
Nov 25 02:10:59 np0005534516 systemd[1]: Starting OpenSSH server daemon...
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Permit User Sessions...
Nov 25 02:10:59 np0005534516 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 02:10:59 np0005534516 systemd[1]: Finished Permit User Sessions.
Nov 25 02:10:59 np0005534516 systemd[1]: Started OpenSSH server daemon.
Nov 25 02:10:59 np0005534516 systemd[1]: Started Command Scheduler.
Nov 25 02:10:59 np0005534516 systemd[1]: Started Getty on tty1.
Nov 25 02:10:59 np0005534516 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 02:10:59 np0005534516 systemd[1]: Reached target Login Prompts.
Nov 25 02:10:59 np0005534516 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Nov 25 02:10:59 np0005534516 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 02:10:59 np0005534516 systemd[1]: Started System Logging Service.
Nov 25 02:10:59 np0005534516 systemd[1]: Reached target Multi-User System.
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 02:10:59 np0005534516 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 02:10:59 np0005534516 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 02:10:59 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 02:10:59 np0005534516 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 25 02:10:59 np0005534516 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 02:10:59 np0005534516 cloud-init[1123]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 07:10:59 +0000. Up 13.93 seconds.
Nov 25 02:10:59 np0005534516 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 02:10:59 np0005534516 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 02:10:59 np0005534516 cloud-init[1272]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 07:10:59 +0000. Up 14.35 seconds.
Nov 25 02:10:59 np0005534516 cloud-init[1291]: #############################################################
Nov 25 02:10:59 np0005534516 cloud-init[1292]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 02:10:59 np0005534516 dracut[1294]: dracut-057-102.git20250818.el9
Nov 25 02:10:59 np0005534516 cloud-init[1296]: 256 SHA256:m7Iq1HRhFtgR3E4SIPoywMiU8O0/ROtpHDZtfr9Y9vg root@np0005534516.novalocal (ECDSA)
Nov 25 02:10:59 np0005534516 cloud-init[1301]: 256 SHA256:lNZ6f0cNvpMQxbKHvh8iqlnRRXtEQwtWvc3FFk46Kp0 root@np0005534516.novalocal (ED25519)
Nov 25 02:10:59 np0005534516 cloud-init[1313]: 3072 SHA256:1BuQscS8Jo8tlBe6U0bfrwfW+RILiGVLevNZFFGnoK4 root@np0005534516.novalocal (RSA)
Nov 25 02:10:59 np0005534516 cloud-init[1316]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 02:10:59 np0005534516 cloud-init[1317]: #############################################################
Nov 25 02:10:59 np0005534516 cloud-init[1272]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 07:10:59 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 14.54 seconds
Nov 25 02:10:59 np0005534516 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 02:10:59 np0005534516 systemd[1]: Reached target Cloud-init target.
Nov 25 02:10:59 np0005534516 dracut[1299]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 02:11:00 np0005534516 chronyd[831]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Nov 25 02:11:00 np0005534516 chronyd[831]: System clock TAI offset set to 37 seconds
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 02:11:00 np0005534516 dracut[1299]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: memstrack is not available
Nov 25 02:11:01 np0005534516 dracut[1299]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 02:11:01 np0005534516 dracut[1299]: memstrack is not available
Nov 25 02:11:01 np0005534516 dracut[1299]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 02:11:02 np0005534516 dracut[1299]: *** Including module: systemd ***
Nov 25 02:11:02 np0005534516 chronyd[831]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Nov 25 02:11:02 np0005534516 dracut[1299]: *** Including module: fips ***
Nov 25 02:11:02 np0005534516 dracut[1299]: *** Including module: systemd-initrd ***
Nov 25 02:11:03 np0005534516 dracut[1299]: *** Including module: i18n ***
Nov 25 02:11:03 np0005534516 dracut[1299]: *** Including module: drm ***
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 02:11:03 np0005534516 irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 25 02:11:03 np0005534516 dracut[1299]: *** Including module: prefixdevname ***
Nov 25 02:11:03 np0005534516 dracut[1299]: *** Including module: kernel-modules ***
Nov 25 02:11:03 np0005534516 kernel: block vda: the capability attribute has been deprecated.
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: kernel-modules-extra ***
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: qemu ***
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: fstab-sys ***
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: rootfs-block ***
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: terminfo ***
Nov 25 02:11:04 np0005534516 dracut[1299]: *** Including module: udev-rules ***
Nov 25 02:11:05 np0005534516 dracut[1299]: Skipping udev rule: 91-permissions.rules
Nov 25 02:11:05 np0005534516 dracut[1299]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: virtiofs ***
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: dracut-systemd ***
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: usrmount ***
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: base ***
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: fs-lib ***
Nov 25 02:11:05 np0005534516 dracut[1299]: *** Including module: kdumpbase ***
Nov 25 02:11:05 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 02:11:06 np0005534516 dracut[1299]:  microcode_ctl module: mangling fw_dir
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 02:11:06 np0005534516 dracut[1299]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Including module: openssl ***
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Including module: shutdown ***
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Including module: squash ***
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Including modules done ***
Nov 25 02:11:06 np0005534516 dracut[1299]: *** Installing kernel module dependencies ***
Nov 25 02:11:08 np0005534516 dracut[1299]: *** Installing kernel module dependencies done ***
Nov 25 02:11:08 np0005534516 dracut[1299]: *** Resolving executable dependencies ***
Nov 25 02:11:10 np0005534516 dracut[1299]: *** Resolving executable dependencies done ***
Nov 25 02:11:10 np0005534516 dracut[1299]: *** Generating early-microcode cpio image ***
Nov 25 02:11:10 np0005534516 dracut[1299]: *** Store current command line parameters ***
Nov 25 02:11:10 np0005534516 dracut[1299]: Stored kernel commandline:
Nov 25 02:11:10 np0005534516 dracut[1299]: No dracut internal kernel commandline stored in the initramfs
Nov 25 02:11:10 np0005534516 dracut[1299]: *** Install squash loader ***
Nov 25 02:11:11 np0005534516 dracut[1299]: *** Squashing the files inside the initramfs ***
Nov 25 02:11:13 np0005534516 dracut[1299]: *** Squashing the files inside the initramfs done ***
Nov 25 02:11:13 np0005534516 dracut[1299]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 02:11:13 np0005534516 dracut[1299]: *** Hardlinking files ***
Nov 25 02:11:13 np0005534516 dracut[1299]: *** Hardlinking files done ***
Nov 25 02:11:13 np0005534516 irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 02:11:13 np0005534516 irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 25 02:11:13 np0005534516 dracut[1299]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 02:11:14 np0005534516 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 25 02:11:14 np0005534516 kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 25 02:11:14 np0005534516 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 02:11:14 np0005534516 systemd[1]: Startup finished in 1.553s (kernel) + 3.061s (initrd) + 24.666s (userspace) = 29.281s.
Nov 25 02:11:19 np0005534516 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 02:11:19 np0005534516 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 02:11:19 np0005534516 systemd-logind[822]: New session 1 of user zuul.
Nov 25 02:11:19 np0005534516 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 02:11:19 np0005534516 systemd[1]: Starting User Manager for UID 1000...
Nov 25 02:11:20 np0005534516 systemd[4304]: Queued start job for default target Main User Target.
Nov 25 02:11:20 np0005534516 systemd[4304]: Created slice User Application Slice.
Nov 25 02:11:20 np0005534516 systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 02:11:20 np0005534516 systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 02:11:20 np0005534516 systemd[4304]: Reached target Paths.
Nov 25 02:11:20 np0005534516 systemd[4304]: Reached target Timers.
Nov 25 02:11:20 np0005534516 systemd[4304]: Starting D-Bus User Message Bus Socket...
Nov 25 02:11:20 np0005534516 systemd[4304]: Starting Create User's Volatile Files and Directories...
Nov 25 02:11:20 np0005534516 systemd[4304]: Listening on D-Bus User Message Bus Socket.
Nov 25 02:11:20 np0005534516 systemd[4304]: Reached target Sockets.
Nov 25 02:11:20 np0005534516 systemd[4304]: Finished Create User's Volatile Files and Directories.
Nov 25 02:11:20 np0005534516 systemd[4304]: Reached target Basic System.
Nov 25 02:11:20 np0005534516 systemd[4304]: Reached target Main User Target.
Nov 25 02:11:20 np0005534516 systemd[4304]: Startup finished in 130ms.
Nov 25 02:11:20 np0005534516 systemd[1]: Started User Manager for UID 1000.
Nov 25 02:11:20 np0005534516 systemd[1]: Started Session 1 of User zuul.
Nov 25 02:11:20 np0005534516 python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:11:24 np0005534516 python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:11:25 np0005534516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 02:11:30 np0005534516 python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:11:31 np0005534516 python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 02:11:33 np0005534516 python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtR2vF12WE7IaFay0/IRXn4s7ZBXnfCbzotnBMgqEGxjCBmv+DCjoWjJu8u+cBPm03BLr18/kVkmoT68+PlLrnVSgjtiRRu6z4zNeVU/xZuZPdP8hHu3mfqNX8ZjfowLFNC+K4y+4hucFIF04m0dmQ/eDcj0lyoaQ50lhRAYgYGApiLR8K7ne0Iqq5Ib+/YYoTnv0K0KdpP3tIAIC/k4UMsJc1FUTgxXAoPOgAQpwAHTc2tuZqAZdy5IjJwt/5oPpOhHMgwPnoQA358KNWdErs2BGOgSsxe8OMVgITK92wAPriGp3WZ+GHJ/GfzX+iPGVDAo3w/0pcwwcmiZ/v+G5s6qXG71DHsp+oKp7TrDQ+/KWIm30Z8hYOafEsCj7bWH4z+Id/bi9bdo+iTpHvHwZL/r2SDIs/Msc8NVIdYwx0WC6cE2fRR11cQ5vBQr2MMOfjnT5QRtO05MTjl+IIMg18c/QLWieg1gi6yUQ6Bl3yrfmtXPCvkHW657exJm6xcls= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:34 np0005534516 python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:34 np0005534516 python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:11:34 np0005534516 python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764054694.2284932-207-79826158606923/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9b1c969306b3449785efa2e3b328a86d_id_rsa follow=False checksum=fba5af1a0176d2a8b1d9c9437b31991a985917ff backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:35 np0005534516 python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:11:35 np0005534516 python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764054695.211606-240-21793266769328/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9b1c969306b3449785efa2e3b328a86d_id_rsa.pub follow=False checksum=618566c3ad108e2af346c1c9e77a9b605fa9144a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:37 np0005534516 python3[4976]: ansible-ping Invoked with data=pong
Nov 25 02:11:38 np0005534516 python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:11:40 np0005534516 python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 02:11:41 np0005534516 python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:41 np0005534516 python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:41 np0005534516 python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:42 np0005534516 python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:42 np0005534516 python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:42 np0005534516 python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:44 np0005534516 python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:45 np0005534516 python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:11:45 np0005534516 python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764054704.717949-21-98265166807157/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:46 np0005534516 python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:46 np0005534516 python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:47 np0005534516 python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:47 np0005534516 python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:47 np0005534516 python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:47 np0005534516 python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:48 np0005534516 python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:48 np0005534516 python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:48 np0005534516 python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:49 np0005534516 python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:49 np0005534516 python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:49 np0005534516 python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:50 np0005534516 python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:50 np0005534516 python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:50 np0005534516 python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:50 np0005534516 python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:51 np0005534516 python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:51 np0005534516 python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:51 np0005534516 python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:52 np0005534516 python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:52 np0005534516 python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:52 np0005534516 python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:52 np0005534516 python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:53 np0005534516 python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:53 np0005534516 python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:53 np0005534516 python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:11:56 np0005534516 python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 02:11:56 np0005534516 systemd[1]: Starting Time & Date Service...
Nov 25 02:11:56 np0005534516 systemd[1]: Started Time & Date Service.
Nov 25 02:11:56 np0005534516 systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Nov 25 02:11:57 np0005534516 python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:57 np0005534516 python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:11:58 np0005534516 python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764054717.3944702-153-267009089039644/source _original_basename=tmpw1f3z2fv follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:58 np0005534516 python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:11:59 np0005534516 python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764054718.34625-183-9943438480477/source _original_basename=tmpaefvyh5a follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:11:59 np0005534516 python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:12:00 np0005534516 python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764054719.566898-231-97118204801705/source _original_basename=tmpraui67ad follow=False checksum=7a82bff5b5e9039ad1ac15f6a7286925b777bf85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:12:00 np0005534516 python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:12:01 np0005534516 python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:12:01 np0005534516 python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:12:01 np0005534516 python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764054721.2406738-273-4786456250286/source _original_basename=tmp2yt_fccq follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:12:02 np0005534516 python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-dd62-003c-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:12:03 np0005534516 python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-dd62-003c-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 02:12:04 np0005534516 python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:12:22 np0005534516 python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:12:26 np0005534516 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 02:12:56 np0005534516 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 02:12:56 np0005534516 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6587] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 02:12:56 np0005534516 systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6755] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6779] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6782] device (eth1): carrier: link connected
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6784] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6789] policy: auto-activating connection 'Wired connection 1' (3faee6e5-0328-325e-bacb-ef8b4c932bc7)
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6794] device (eth1): Activation: starting connection 'Wired connection 1' (3faee6e5-0328-325e-bacb-ef8b4c932bc7)
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6794] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6797] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6800] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:12:56 np0005534516 NetworkManager[858]: <info>  [1764054776.6804] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:12:57 np0005534516 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b941-9eeb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:13:07 np0005534516 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:13:07 np0005534516 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764054787.1309433-102-271282669823455/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=6eaf70b075bdb274c3582c057be90f3e4a5fcb13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:13:08 np0005534516 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:13:08 np0005534516 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 02:13:08 np0005534516 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 02:13:08 np0005534516 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8380] caught SIGTERM, shutting down normally.
Nov 25 02:13:08 np0005534516 systemd[1]: Stopping Network Manager...
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8388] dhcp4 (eth0): canceled DHCP transaction
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8389] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8389] dhcp4 (eth0): state changed no lease
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8391] manager: NetworkManager state is now CONNECTING
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8563] dhcp4 (eth1): canceled DHCP transaction
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8564] dhcp4 (eth1): state changed no lease
Nov 25 02:13:08 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:13:08 np0005534516 NetworkManager[858]: <info>  [1764054788.8672] exiting (success)
Nov 25 02:13:08 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:13:08 np0005534516 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 02:13:08 np0005534516 systemd[1]: Stopped Network Manager.
Nov 25 02:13:08 np0005534516 systemd[1]: Starting Network Manager...
Nov 25 02:13:08 np0005534516 NetworkManager[7188]: <info>  [1764054788.9361] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4972c5ac-d4d6-4122-899d-4e8787c329b1)
Nov 25 02:13:08 np0005534516 NetworkManager[7188]: <info>  [1764054788.9366] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 02:13:08 np0005534516 NetworkManager[7188]: <info>  [1764054788.9434] manager[0x564a3ec20070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 02:13:08 np0005534516 systemd[1]: Starting Hostname Service...
Nov 25 02:13:09 np0005534516 systemd[1]: Started Hostname Service.
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0327] hostname: hostname: using hostnamed
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0328] hostname: static hostname changed from (none) to "np0005534516.novalocal"
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0335] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0340] manager[0x564a3ec20070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0341] manager[0x564a3ec20070]: rfkill: WWAN hardware radio set enabled
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0371] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0371] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0372] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0372] manager: Networking is enabled by state file
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0375] settings: Loaded settings plugin: keyfile (internal)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0381] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0408] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0420] dhcp: init: Using DHCP client 'internal'
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0423] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0428] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0433] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0440] device (lo): Activation: starting connection 'lo' (9e1b533f-1bf0-4204-9b0f-32d0aa4be169)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0448] device (eth0): carrier: link connected
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0451] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0456] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0457] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0463] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0471] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0479] device (eth1): carrier: link connected
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0484] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0489] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3faee6e5-0328-325e-bacb-ef8b4c932bc7) (indicated)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0489] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0494] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0502] device (eth1): Activation: starting connection 'Wired connection 1' (3faee6e5-0328-325e-bacb-ef8b4c932bc7)
Nov 25 02:13:09 np0005534516 systemd[1]: Started Network Manager.
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0511] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0516] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0519] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0522] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0524] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0528] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0531] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0534] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0536] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0541] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0544] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0554] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0556] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0570] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0576] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0581] device (lo): Activation: successful, device activated.
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0615] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0622] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 02:13:09 np0005534516 systemd[1]: Starting Network Manager Wait Online...
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0686] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0708] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0709] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0711] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0714] device (eth0): Activation: successful, device activated.
Nov 25 02:13:09 np0005534516 NetworkManager[7188]: <info>  [1764054789.0717] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 02:13:09 np0005534516 python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b941-9eeb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:13:19 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:13:39 np0005534516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3459] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 02:13:54 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:13:54 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3914] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3916] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3925] device (eth1): Activation: successful, device activated.
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3931] manager: startup complete
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3933] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <warn>  [1764054834.3938] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.3944] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 systemd[1]: Finished Network Manager Wait Online.
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4074] dhcp4 (eth1): canceled DHCP transaction
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4075] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4075] dhcp4 (eth1): state changed no lease
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4099] policy: auto-activating connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338)
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4106] device (eth1): Activation: starting connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338)
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4108] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4112] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4124] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4139] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4185] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4187] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:13:54 np0005534516 NetworkManager[7188]: <info>  [1764054834.4197] device (eth1): Activation: successful, device activated.
Nov 25 02:13:55 np0005534516 systemd[4304]: Starting Mark boot as successful...
Nov 25 02:13:55 np0005534516 systemd[4304]: Finished Mark boot as successful.
Nov 25 02:14:04 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:14:09 np0005534516 systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 25 02:14:17 np0005534516 systemd-logind[822]: New session 3 of user zuul.
Nov 25 02:14:17 np0005534516 systemd[1]: Started Session 3 of User zuul.
Nov 25 02:14:18 np0005534516 python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:14:18 np0005534516 python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764054857.9219978-267-47880752293376/source _original_basename=tmp0ncs0e8m follow=False checksum=cd6137842daa20156a9f77315d48bf502ab12a23 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:14:21 np0005534516 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 02:14:21 np0005534516 systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 25 02:14:21 np0005534516 systemd-logind[822]: Removed session 3.
Nov 25 02:16:55 np0005534516 systemd[4304]: Created slice User Background Tasks Slice.
Nov 25 02:16:55 np0005534516 systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 02:16:55 np0005534516 systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 02:20:55 np0005534516 systemd-logind[822]: New session 4 of user zuul.
Nov 25 02:20:55 np0005534516 systemd[1]: Started Session 4 of User zuul.
Nov 25 02:20:55 np0005534516 python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-6c09-4d05-000000001cc6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:20:56 np0005534516 python3[7531]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:56 np0005534516 python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:56 np0005534516 python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:57 np0005534516 python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:57 np0005534516 python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:58 np0005534516 python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:20:58 np0005534516 python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764055258.0323703-470-110512909514608/source _original_basename=tmphsho5210 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:20:59 np0005534516 python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 02:20:59 np0005534516 systemd[1]: Reloading.
Nov 25 02:20:59 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:21:01 np0005534516 python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 02:21:01 np0005534516 python3[7919]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:21:01 np0005534516 python3[7947]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:21:02 np0005534516 python3[7975]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:21:02 np0005534516 python3[8003]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:21:02 np0005534516 python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-6c09-4d05-000000001ccd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:21:03 np0005534516 python3[8060]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:21:05 np0005534516 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 02:21:05 np0005534516 systemd[1]: session-4.scope: Consumed 4.711s CPU time.
Nov 25 02:21:05 np0005534516 systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 25 02:21:05 np0005534516 systemd-logind[822]: Removed session 4.
Nov 25 02:21:07 np0005534516 systemd-logind[822]: New session 5 of user zuul.
Nov 25 02:21:07 np0005534516 systemd[1]: Started Session 5 of User zuul.
Nov 25 02:21:08 np0005534516 python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 02:21:22 np0005534516 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:21:22 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:21:31 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:21:40 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:21:42 np0005534516 setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 02:21:42 np0005534516 setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 25 02:21:52 np0005534516 kernel: SELinux:  Converting 388 SID table entries...
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:21:52 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:22:13 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 02:22:13 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:22:13 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:22:13 np0005534516 systemd[1]: Reloading.
Nov 25 02:22:13 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:22:13 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:22:21 np0005534516 python3[13903]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-dcdf-50f3-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:22:21 np0005534516 kernel: evm: overlay not supported
Nov 25 02:22:21 np0005534516 systemd[4304]: Starting D-Bus User Message Bus...
Nov 25 02:22:21 np0005534516 dbus-broker-launch[14112]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 02:22:21 np0005534516 dbus-broker-launch[14112]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 02:22:21 np0005534516 systemd[4304]: Started D-Bus User Message Bus.
Nov 25 02:22:21 np0005534516 dbus-broker-lau[14112]: Ready
Nov 25 02:22:21 np0005534516 systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 02:22:21 np0005534516 systemd[4304]: Created slice Slice /user.
Nov 25 02:22:21 np0005534516 systemd[4304]: podman-14031.scope: unit configures an IP firewall, but not running as root.
Nov 25 02:22:21 np0005534516 systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 02:22:21 np0005534516 systemd[4304]: Started podman-14031.scope.
Nov 25 02:22:22 np0005534516 systemd[4304]: Started podman-pause-bd53dd78.scope.
Nov 25 02:22:22 np0005534516 python3[14578]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.64:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.64:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:22:22 np0005534516 python3[14578]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 02:22:23 np0005534516 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 02:22:23 np0005534516 systemd[1]: session-5.scope: Consumed 1min 660ms CPU time.
Nov 25 02:22:23 np0005534516 systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 25 02:22:23 np0005534516 systemd-logind[822]: Removed session 5.
Nov 25 02:22:45 np0005534516 systemd-logind[822]: New session 6 of user zuul.
Nov 25 02:22:45 np0005534516 systemd[1]: Started Session 6 of User zuul.
Nov 25 02:22:45 np0005534516 python3[23667]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcQZdjHLvpM3ZPqAQQcbhMhTJdIN3tnN9dkSKs099LAj4Y/ZqTjJ0RsFAD+Cz8GCLLYDRZGb8u94SdyKFBL26w= zuul@np0005534515.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:22:46 np0005534516 python3[23851]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcQZdjHLvpM3ZPqAQQcbhMhTJdIN3tnN9dkSKs099LAj4Y/ZqTjJ0RsFAD+Cz8GCLLYDRZGb8u94SdyKFBL26w= zuul@np0005534515.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:22:47 np0005534516 python3[24160]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534516.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 02:22:47 np0005534516 python3[24404]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcQZdjHLvpM3ZPqAQQcbhMhTJdIN3tnN9dkSKs099LAj4Y/ZqTjJ0RsFAD+Cz8GCLLYDRZGb8u94SdyKFBL26w= zuul@np0005534515.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 02:22:48 np0005534516 python3[24661]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:22:48 np0005534516 python3[24936]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764055367.9658155-135-252690687155173/source _original_basename=tmpz1c1b2x0 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:22:49 np0005534516 python3[25268]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 02:22:49 np0005534516 systemd[1]: Starting Hostname Service...
Nov 25 02:22:49 np0005534516 systemd[1]: Started Hostname Service.
Nov 25 02:22:49 np0005534516 systemd-hostnamed[25380]: Changed pretty hostname to 'compute-0'
Nov 25 02:22:49 np0005534516 systemd-hostnamed[25380]: Hostname set to <compute-0> (static)
Nov 25 02:22:49 np0005534516 NetworkManager[7188]: <info>  [1764055369.8110] hostname: static hostname changed from "np0005534516.novalocal" to "compute-0"
Nov 25 02:22:49 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:22:49 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:22:50 np0005534516 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 02:22:50 np0005534516 systemd[1]: session-6.scope: Consumed 2.487s CPU time.
Nov 25 02:22:50 np0005534516 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 25 02:22:50 np0005534516 systemd-logind[822]: Removed session 6.
Nov 25 02:22:59 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:23:03 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:23:03 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:23:03 np0005534516 systemd[1]: man-db-cache-update.service: Consumed 58.569s CPU time.
Nov 25 02:23:03 np0005534516 systemd[1]: run-r05d556d7e7b04dafaa666f222cf51ba4.service: Deactivated successfully.
Nov 25 02:23:19 np0005534516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 02:25:55 np0005534516 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 02:25:55 np0005534516 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 02:25:55 np0005534516 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 02:25:55 np0005534516 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 02:26:40 np0005534516 systemd-logind[822]: New session 7 of user zuul.
Nov 25 02:26:40 np0005534516 systemd[1]: Started Session 7 of User zuul.
Nov 25 02:26:41 np0005534516 python3[30004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:26:43 np0005534516 python3[30120]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:44 np0005534516 python3[30193]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:44 np0005534516 python3[30219]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:45 np0005534516 python3[30292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:45 np0005534516 python3[30318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:45 np0005534516 python3[30391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:46 np0005534516 python3[30417]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:46 np0005534516 python3[30490]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:46 np0005534516 python3[30516]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:47 np0005534516 python3[30589]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:47 np0005534516 python3[30615]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:48 np0005534516 python3[30688]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:26:48 np0005534516 python3[30714]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:26:48 np0005534516 python3[30787]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764055603.5962079-33543-184393934772221/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:27:03 np0005534516 python3[30845]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:32:03 np0005534516 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 02:32:03 np0005534516 systemd[1]: session-7.scope: Consumed 5.863s CPU time.
Nov 25 02:32:03 np0005534516 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 25 02:32:03 np0005534516 systemd-logind[822]: Removed session 7.
Nov 25 02:38:29 np0005534516 systemd-logind[822]: New session 8 of user zuul.
Nov 25 02:38:29 np0005534516 systemd[1]: Started Session 8 of User zuul.
Nov 25 02:38:30 np0005534516 python3.9[31007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:38:32 np0005534516 python3.9[31189]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:38:40 np0005534516 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 02:38:40 np0005534516 systemd[1]: session-8.scope: Consumed 8.441s CPU time.
Nov 25 02:38:40 np0005534516 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 25 02:38:40 np0005534516 systemd-logind[822]: Removed session 8.
Nov 25 02:38:56 np0005534516 systemd-logind[822]: New session 9 of user zuul.
Nov 25 02:38:56 np0005534516 systemd[1]: Started Session 9 of User zuul.
Nov 25 02:38:57 np0005534516 python3.9[31399]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 02:38:58 np0005534516 python3.9[31573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:39:00 np0005534516 python3.9[31725]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:39:01 np0005534516 python3.9[31878]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:39:02 np0005534516 python3.9[32030]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:39:02 np0005534516 python3.9[32182]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:39:03 np0005534516 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 02:39:03 np0005534516 irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 25 02:39:03 np0005534516 python3.9[32305]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056342.342886-73-177517720227582/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:39:04 np0005534516 python3.9[32457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:39:05 np0005534516 python3.9[32613]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:39:06 np0005534516 python3.9[32765]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:39:07 np0005534516 python3.9[32915]: ansible-ansible.builtin.service_facts Invoked
Nov 25 02:39:12 np0005534516 python3.9[33168]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:39:13 np0005534516 python3.9[33318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:39:15 np0005534516 python3.9[33472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:39:16 np0005534516 python3.9[33630]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:39:17 np0005534516 python3.9[33714]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:40:02 np0005534516 systemd[1]: Reloading.
Nov 25 02:40:02 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:40:02 np0005534516 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 02:40:02 np0005534516 systemd[1]: Reloading.
Nov 25 02:40:02 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:40:02 np0005534516 systemd[1]: Starting dnf makecache...
Nov 25 02:40:02 np0005534516 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 02:40:02 np0005534516 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 02:40:02 np0005534516 systemd[1]: Reloading.
Nov 25 02:40:03 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:40:03 np0005534516 dnf[33963]: Failed determining last makecache time.
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-barbican-42b4c41831408a8e323 157 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 177 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-cinder-1c00d6490d88e436f26ef 160 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-stevedore-c4acc5639fd2329372142 190 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-observabilityclient-2f31846d73c 193 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-os-net-config-bbae2ed8a159b0435a473f38 198 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-designate-tests-tempest-347fdbc 156 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-glance-1fd12c29b339f30fe823e 165 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 175 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-manila-3c01b7181572c95dac462 159 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-whitebox-neutron-tests-tempest- 143 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-octavia-ba397f07a7331190208c 151 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-watcher-c014f81a8647287f6dcc 171 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-tcib-1124124ec06aadbac34f0d340b 163 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 130 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-swift-dc98a8463506ac520c469a 154 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-python-tempestconf-8515371b7cceebd4282 159 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 02:40:03 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 02:40:03 np0005534516 dnf[33963]: delorean-openstack-heat-ui-013accbfd179753bc3f0 144 kB/s | 3.0 kB     00:00
Nov 25 02:40:03 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 02:40:03 np0005534516 dnf[33963]: CentOS Stream 9 - BaseOS                         56 kB/s | 5.4 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: CentOS Stream 9 - AppStream                      63 kB/s | 6.1 kB     00:00
Nov 25 02:40:03 np0005534516 dnf[33963]: CentOS Stream 9 - CRB                            58 kB/s | 5.3 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: dlrn-antelope-testing                           122 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: dlrn-antelope-build-deps                        151 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: centos9-rabbitmq                                 44 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: centos9-storage                                  43 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: centos9-opstools                                 40 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: NFV SIG OpenvSwitch                              32 kB/s | 3.0 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: repo-setup-centos-appstream                      58 kB/s | 4.4 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: repo-setup-centos-baseos                         53 kB/s | 3.9 kB     00:00
Nov 25 02:40:04 np0005534516 dnf[33963]: repo-setup-centos-highavailability               60 kB/s | 3.9 kB     00:00
Nov 25 02:40:05 np0005534516 dnf[33963]: repo-setup-centos-powertools                     61 kB/s | 4.3 kB     00:00
Nov 25 02:40:05 np0005534516 dnf[33963]: Extra Packages for Enterprise Linux 9 - x86_64  230 kB/s |  35 kB     00:00
Nov 25 02:40:05 np0005534516 dnf[33963]: Metadata cache created.
Nov 25 02:40:05 np0005534516 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 02:40:05 np0005534516 systemd[1]: Finished dnf makecache.
Nov 25 02:40:05 np0005534516 systemd[1]: dnf-makecache.service: Consumed 1.879s CPU time.
Nov 25 02:41:31 np0005534516 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:41:31 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:41:31 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 02:41:31 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:41:31 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:41:31 np0005534516 systemd[1]: Reloading.
Nov 25 02:41:31 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:41:31 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:41:34 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:41:34 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:41:34 np0005534516 systemd[1]: man-db-cache-update.service: Consumed 1.558s CPU time.
Nov 25 02:41:34 np0005534516 systemd[1]: run-r866cd617c3d643cc8df9bca99782df21.service: Deactivated successfully.
Nov 25 02:41:34 np0005534516 python3.9[35272]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:41:37 np0005534516 python3.9[35553]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 02:41:38 np0005534516 python3.9[35705]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 02:41:40 np0005534516 python3.9[35858]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:41:41 np0005534516 python3.9[36010]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 02:41:42 np0005534516 python3.9[36162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:41:43 np0005534516 python3.9[36314]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:41:43 np0005534516 python3.9[36437]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056502.8297493-236-15444087545414/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:41:48 np0005534516 python3.9[36589]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:41:49 np0005534516 python3.9[36741]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:41:49 np0005534516 python3.9[36894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:41:51 np0005534516 python3.9[37046]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 02:41:51 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 02:41:52 np0005534516 python3.9[37200]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 02:41:53 np0005534516 python3.9[37358]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 02:41:53 np0005534516 python3.9[37518]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 02:41:54 np0005534516 python3.9[37671]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 02:41:55 np0005534516 python3.9[37829]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 02:41:56 np0005534516 python3.9[37981]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:41:59 np0005534516 python3.9[38134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:42:00 np0005534516 python3.9[38286]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:42:01 np0005534516 python3.9[38409]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764056519.939243-355-171880708261270/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:42:02 np0005534516 python3.9[38561]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:42:02 np0005534516 systemd[1]: Starting Load Kernel Modules...
Nov 25 02:42:02 np0005534516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 02:42:02 np0005534516 kernel: Bridge firewalling registered
Nov 25 02:42:02 np0005534516 systemd-modules-load[38565]: Inserted module 'br_netfilter'
Nov 25 02:42:02 np0005534516 systemd[1]: Finished Load Kernel Modules.
Nov 25 02:42:03 np0005534516 python3.9[38722]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:42:04 np0005534516 python3.9[38845]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764056522.9005845-378-66511682387437/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:42:05 np0005534516 python3.9[38997]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:42:08 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 02:42:08 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 02:42:08 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:42:08 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:42:08 np0005534516 systemd[1]: Reloading.
Nov 25 02:42:09 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:42:09 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:42:11 np0005534516 python3.9[40451]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:42:11 np0005534516 python3.9[41439]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 02:42:12 np0005534516 python3.9[42233]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:42:13 np0005534516 python3.9[43070]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:42:13 np0005534516 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 02:42:13 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:42:13 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:42:13 np0005534516 systemd[1]: man-db-cache-update.service: Consumed 5.234s CPU time.
Nov 25 02:42:13 np0005534516 systemd[1]: run-r607d7352de374d90afb2908225766b46.service: Deactivated successfully.
Nov 25 02:42:14 np0005534516 systemd[1]: Starting Authorization Manager...
Nov 25 02:42:14 np0005534516 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 02:42:14 np0005534516 polkitd[43396]: Started polkitd version 0.117
Nov 25 02:42:14 np0005534516 systemd[1]: Started Authorization Manager.
Nov 25 02:42:15 np0005534516 python3.9[43566]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:42:15 np0005534516 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 02:42:15 np0005534516 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 02:42:15 np0005534516 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 02:42:15 np0005534516 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 02:42:15 np0005534516 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 02:42:16 np0005534516 python3.9[43728]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 02:42:19 np0005534516 python3.9[43880]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:42:19 np0005534516 systemd[1]: Reloading.
Nov 25 02:42:19 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:42:20 np0005534516 python3.9[44069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:42:20 np0005534516 systemd[1]: Reloading.
Nov 25 02:42:20 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:42:21 np0005534516 python3.9[44257]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:42:22 np0005534516 python3.9[44410]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:42:22 np0005534516 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 02:42:23 np0005534516 python3.9[44563]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:42:25 np0005534516 python3.9[44725]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:42:26 np0005534516 python3.9[44878]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:42:26 np0005534516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 02:42:26 np0005534516 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 02:42:26 np0005534516 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 02:42:26 np0005534516 systemd[1]: Starting Apply Kernel Variables...
Nov 25 02:42:26 np0005534516 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 02:42:26 np0005534516 systemd[1]: Finished Apply Kernel Variables.
Nov 25 02:42:26 np0005534516 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 02:42:26 np0005534516 systemd[1]: session-9.scope: Consumed 2min 20.496s CPU time.
Nov 25 02:42:26 np0005534516 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 25 02:42:26 np0005534516 systemd-logind[822]: Removed session 9.
Nov 25 02:42:32 np0005534516 systemd-logind[822]: New session 10 of user zuul.
Nov 25 02:42:32 np0005534516 systemd[1]: Started Session 10 of User zuul.
Nov 25 02:42:33 np0005534516 python3.9[45061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:42:34 np0005534516 python3.9[45217]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 02:42:35 np0005534516 python3.9[45370]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 02:42:36 np0005534516 python3.9[45528]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 02:42:38 np0005534516 python3.9[45688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:42:39 np0005534516 python3.9[45772]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 02:42:43 np0005534516 python3.9[45936]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:42:56 np0005534516 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:42:56 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:42:56 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 02:42:56 np0005534516 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 02:42:58 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:42:58 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:42:58 np0005534516 systemd[1]: Reloading.
Nov 25 02:42:58 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:42:58 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:42:58 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:42:59 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:42:59 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:42:59 np0005534516 systemd[1]: run-rb72746d50614410a92e424e4b64194cd.service: Deactivated successfully.
Nov 25 02:43:00 np0005534516 python3.9[47033]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 02:43:00 np0005534516 systemd[1]: Reloading.
Nov 25 02:43:00 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:43:00 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:43:00 np0005534516 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 02:43:00 np0005534516 chown[47076]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 02:43:00 np0005534516 ovs-ctl[47081]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 02:43:00 np0005534516 ovs-ctl[47081]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 02:43:00 np0005534516 ovs-ctl[47081]: Starting ovsdb-server [  OK  ]
Nov 25 02:43:00 np0005534516 ovs-vsctl[47130]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 02:43:00 np0005534516 ovs-vsctl[47149]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"0e3362c2-a4a0-4a10-9289-943331244f84\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 02:43:00 np0005534516 ovs-ctl[47081]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 02:43:00 np0005534516 ovs-vsctl[47155]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 02:43:00 np0005534516 ovs-ctl[47081]: Enabling remote OVSDB managers [  OK  ]
Nov 25 02:43:00 np0005534516 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 02:43:00 np0005534516 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 02:43:00 np0005534516 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 02:43:00 np0005534516 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 02:43:01 np0005534516 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 02:43:01 np0005534516 ovs-ctl[47200]: Inserting openvswitch module [  OK  ]
Nov 25 02:43:01 np0005534516 ovs-ctl[47169]: Starting ovs-vswitchd [  OK  ]
Nov 25 02:43:01 np0005534516 ovs-vsctl[47220]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 02:43:01 np0005534516 ovs-ctl[47169]: Enabling remote OVSDB managers [  OK  ]
Nov 25 02:43:01 np0005534516 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 02:43:01 np0005534516 systemd[1]: Starting Open vSwitch...
Nov 25 02:43:01 np0005534516 systemd[1]: Finished Open vSwitch.
Nov 25 02:43:02 np0005534516 python3.9[47372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:43:03 np0005534516 python3.9[47524]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 02:43:04 np0005534516 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 02:43:04 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 02:43:05 np0005534516 python3.9[47679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:43:06 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 02:43:06 np0005534516 python3.9[47837]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:43:09 np0005534516 python3.9[47990]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:43:10 np0005534516 python3.9[48277]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 02:43:11 np0005534516 python3.9[48427]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:43:12 np0005534516 python3.9[48581]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:43:14 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:43:14 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:43:14 np0005534516 systemd[1]: Reloading.
Nov 25 02:43:15 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:43:15 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:43:15 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:43:15 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:43:15 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:43:15 np0005534516 systemd[1]: run-r934c18a0d15a47e4bdf7a4d4dc07cecc.service: Deactivated successfully.
Nov 25 02:43:16 np0005534516 python3.9[48898]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:43:16 np0005534516 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 02:43:16 np0005534516 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 02:43:16 np0005534516 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 02:43:16 np0005534516 systemd[1]: Stopping Network Manager...
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4348] caught SIGTERM, shutting down normally.
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4362] dhcp4 (eth0): canceled DHCP transaction
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4362] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4362] dhcp4 (eth0): state changed no lease
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4365] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 02:43:16 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:43:16 np0005534516 NetworkManager[7188]: <info>  [1764056596.4596] exiting (success)
Nov 25 02:43:16 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:43:16 np0005534516 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 02:43:16 np0005534516 systemd[1]: Stopped Network Manager.
Nov 25 02:43:16 np0005534516 systemd[1]: NetworkManager.service: Consumed 12.280s CPU time, 4.1M memory peak, read 0B from disk, written 21.5K to disk.
Nov 25 02:43:16 np0005534516 systemd[1]: Starting Network Manager...
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.5628] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4972c5ac-d4d6-4122-899d-4e8787c329b1)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.5631] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.5702] manager[0x56264dd89090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 02:43:16 np0005534516 systemd[1]: Starting Hostname Service...
Nov 25 02:43:16 np0005534516 systemd[1]: Started Hostname Service.
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6813] hostname: hostname: using hostnamed
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6814] hostname: static hostname changed from (none) to "compute-0"
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6822] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6829] manager[0x56264dd89090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6830] manager[0x56264dd89090]: rfkill: WWAN hardware radio set enabled
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6864] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6881] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6884] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6885] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6887] manager: Networking is enabled by state file
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6890] settings: Loaded settings plugin: keyfile (internal)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6897] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6952] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6971] dhcp: init: Using DHCP client 'internal'
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6978] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.6989] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7001] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7014] device (lo): Activation: starting connection 'lo' (9e1b533f-1bf0-4204-9b0f-32d0aa4be169)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7025] device (eth0): carrier: link connected
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7032] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7040] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7041] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7051] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7062] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7070] device (eth1): carrier: link connected
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7078] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7085] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338) (indicated)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7086] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7095] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7105] device (eth1): Activation: starting connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7114] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 02:43:16 np0005534516 systemd[1]: Started Network Manager.
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7125] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7129] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7132] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7136] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7141] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7144] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7148] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7153] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7166] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7170] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7185] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7206] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7220] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7223] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7233] device (lo): Activation: successful, device activated.
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7245] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7248] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7253] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 02:43:16 np0005534516 NetworkManager[48915]: <info>  [1764056596.7259] device (eth1): Activation: successful, device activated.
Nov 25 02:43:16 np0005534516 systemd[1]: Starting Network Manager Wait Online...
Nov 25 02:43:17 np0005534516 python3.9[49106]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.6061] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.6075] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9185] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9217] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9218] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9221] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9224] device (eth0): Activation: successful, device activated.
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9232] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 02:43:18 np0005534516 NetworkManager[48915]: <info>  [1764056598.9234] manager: startup complete
Nov 25 02:43:18 np0005534516 systemd[1]: Finished Network Manager Wait Online.
Nov 25 02:43:24 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:43:24 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:43:24 np0005534516 systemd[1]: Reloading.
Nov 25 02:43:24 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:43:24 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:43:24 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 02:43:25 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:43:25 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:43:25 np0005534516 systemd[1]: run-rb34be446d243455781c44f0309754528.service: Deactivated successfully.
Nov 25 02:43:26 np0005534516 python3.9[49583]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:43:27 np0005534516 python3.9[49735]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:27 np0005534516 python3.9[49889]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:28 np0005534516 python3.9[50041]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:29 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:43:29 np0005534516 python3.9[50193]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:29 np0005534516 python3.9[50345]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:30 np0005534516 python3.9[50497]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:43:31 np0005534516 python3.9[50620]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056610.1418374-229-17074398709880/.source _original_basename=.p8uygv3y follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:32 np0005534516 python3.9[50772]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:33 np0005534516 python3.9[50924]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 02:43:34 np0005534516 python3.9[51076]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:37 np0005534516 python3.9[51503]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 02:43:38 np0005534516 ansible-async_wrapper.py[51678]: Invoked with j471873380631 300 /home/zuul/.ansible/tmp/ansible-tmp-1764056617.5322585-295-147080313646375/AnsiballZ_edpm_os_net_config.py _
Nov 25 02:43:38 np0005534516 ansible-async_wrapper.py[51681]: Starting module and watcher
Nov 25 02:43:38 np0005534516 ansible-async_wrapper.py[51681]: Start watching 51682 (300)
Nov 25 02:43:38 np0005534516 ansible-async_wrapper.py[51682]: Start module (51682)
Nov 25 02:43:38 np0005534516 ansible-async_wrapper.py[51678]: Return async_wrapper task started.
Nov 25 02:43:38 np0005534516 python3.9[51683]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 02:43:39 np0005534516 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 02:43:39 np0005534516 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 02:43:39 np0005534516 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 02:43:39 np0005534516 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 02:43:39 np0005534516 kernel: cfg80211: failed to load regulatory.db
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2052] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2076] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2766] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2768] audit: op="connection-add" uuid="e7530fca-e368-4ec3-b54c-dae62fb5ffc7" name="br-ex-br" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2788] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2790] audit: op="connection-add" uuid="f65a87db-3aa0-4fb3-a9da-cf6985dabba7" name="br-ex-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2803] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2805] audit: op="connection-add" uuid="4ed0cb1a-944b-4752-adb3-51023ee0251e" name="eth1-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2817] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2819] audit: op="connection-add" uuid="309d0eca-8a96-4e2b-9d10-3cffc5938d26" name="vlan20-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2831] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2832] audit: op="connection-add" uuid="3b4692fa-4434-42e6-9b3e-b7bc7cb91c49" name="vlan21-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2844] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2846] audit: op="connection-add" uuid="b48d247e-134e-4c28-a5a7-223d292c196b" name="vlan22-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2859] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2860] audit: op="connection-add" uuid="4e095f3c-7fa8-468c-8f19-d0b568d0d36c" name="vlan23-port" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2881] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2897] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.2899] audit: op="connection-add" uuid="5744b669-f7ad-4c99-90ff-775c50d7ffb2" name="br-ex-if" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4139] audit: op="connection-update" uuid="8a67a5cf-a148-5b3d-8fb8-4f04d5f08338" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv6.addresses,ovs-external-ids.data,ovs-interface.type,connection.controller,connection.slave-type,connection.timestamp,connection.port-type,connection.master,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.routes,ipv4.addresses" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4176] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4180] audit: op="connection-add" uuid="ad3b2c8e-27bf-4888-a828-15dc546acd02" name="vlan20-if" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4216] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4219] audit: op="connection-add" uuid="91deb49c-2e90-45b3-9be4-1dcedac77924" name="vlan21-if" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4252] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4256] audit: op="connection-add" uuid="d9c883c9-bc7a-4f9d-b66c-42db528ae01b" name="vlan22-if" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4290] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4294] audit: op="connection-add" uuid="11300121-847d-4556-88ce-32c4d199160f" name="vlan23-if" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4318] audit: op="connection-delete" uuid="3faee6e5-0328-325e-bacb-ef8b4c932bc7" name="Wired connection 1" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4346] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4368] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4376] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e7530fca-e368-4ec3-b54c-dae62fb5ffc7)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4378] audit: op="connection-activate" uuid="e7530fca-e368-4ec3-b54c-dae62fb5ffc7" name="br-ex-br" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4383] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4399] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4407] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f65a87db-3aa0-4fb3-a9da-cf6985dabba7)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4410] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4422] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4430] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (4ed0cb1a-944b-4752-adb3-51023ee0251e)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4434] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4450] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4459] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (309d0eca-8a96-4e2b-9d10-3cffc5938d26)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4464] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4478] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4487] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (3b4692fa-4434-42e6-9b3e-b7bc7cb91c49)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4491] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4506] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4517] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (b48d247e-134e-4c28-a5a7-223d292c196b)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4520] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4535] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4545] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4e095f3c-7fa8-468c-8f19-d0b568d0d36c)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4547] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4555] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4560] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4576] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4586] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4594] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (5744b669-f7ad-4c99-90ff-775c50d7ffb2)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4596] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4604] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4608] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4611] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4614] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4642] device (eth1): disconnecting for new activation request.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4644] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4651] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4658] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4660] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4668] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4675] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4681] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ad3b2c8e-27bf-4888-a828-15dc546acd02)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4683] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4687] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4690] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4693] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4698] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4705] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4712] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (91deb49c-2e90-45b3-9be4-1dcedac77924)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4715] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4719] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4723] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4725] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4729] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4737] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4743] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d9c883c9-bc7a-4f9d-b66c-42db528ae01b)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4744] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4748] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4751] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4753] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4757] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4764] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4771] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (11300121-847d-4556-88ce-32c4d199160f)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4772] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4777] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4781] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4783] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4785] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4803] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4807] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4811] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4815] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4825] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4831] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4837] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4842] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 kernel: ovs-system: entered promiscuous mode
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4858] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4872] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 systemd-udevd[51687]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:43:41 np0005534516 kernel: Timeout policy base is empty
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4883] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4890] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4894] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4905] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4911] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4917] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4920] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4930] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4937] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4942] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4945] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4954] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4962] dhcp4 (eth0): canceled DHCP transaction
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4962] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4962] dhcp4 (eth0): state changed no lease
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4966] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4985] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.4992] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51684 uid=0 result="fail" reason="Device is not activated"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.5011] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.5018] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.5026] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 02:43:41 np0005534516 kernel: br-ex: entered promiscuous mode
Nov 25 02:43:41 np0005534516 kernel: vlan22: entered promiscuous mode
Nov 25 02:43:41 np0005534516 kernel: vlan20: entered promiscuous mode
Nov 25 02:43:41 np0005534516 systemd-udevd[51689]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:43:41 np0005534516 kernel: vlan21: entered promiscuous mode
Nov 25 02:43:41 np0005534516 systemd-udevd[51784]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7690] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7727] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7737] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7745] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7752] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.7753] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8145] device (eth1): Activation: starting connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8150] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8153] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8156] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8158] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8161] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8164] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8166] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8169] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8178] device (eth1): disconnecting for new activation request.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8180] audit: op="connection-activate" uuid="8a67a5cf-a148-5b3d-8fb8-4f04d5f08338" name="ci-private-network" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8183] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8197] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8203] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8210] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8216] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8220] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8224] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8228] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 kernel: vlan23: entered promiscuous mode
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8234] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8239] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8244] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8254] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8263] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8269] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8274] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8310] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8322] device (eth1): Activation: starting connection 'ci-private-network' (8a67a5cf-a148-5b3d-8fb8-4f04d5f08338)
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8327] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51684 uid=0 result="success"
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8331] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8352] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8358] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8369] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8379] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8389] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8404] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8412] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8419] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 02:43:41 np0005534516 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8436] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8447] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8477] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8495] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8499] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8510] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8522] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8536] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8550] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8572] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8574] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8578] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8587] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8601] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8617] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8630] device (eth1): Activation: successful, device activated.
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.8674] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.9388] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.9391] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 02:43:41 np0005534516 NetworkManager[48915]: <info>  [1764056621.9401] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 02:43:42 np0005534516 python3.9[52047]: ansible-ansible.legacy.async_status Invoked with jid=j471873380631.51678 mode=status _async_dir=/root/.ansible_async
Nov 25 02:43:43 np0005534516 NetworkManager[48915]: <info>  [1764056623.1980] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51684 uid=0 result="success"
Nov 25 02:43:43 np0005534516 NetworkManager[48915]: <info>  [1764056623.3714] checkpoint[0x56264dd60950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 02:43:43 np0005534516 NetworkManager[48915]: <info>  [1764056623.3717] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51684 uid=0 result="success"
Nov 25 02:43:43 np0005534516 ansible-async_wrapper.py[51681]: 51682 still running (300)
Nov 25 02:43:43 np0005534516 NetworkManager[48915]: <info>  [1764056623.6984] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51684 uid=0 result="success"
Nov 25 02:43:43 np0005534516 NetworkManager[48915]: <info>  [1764056623.6998] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51684 uid=0 result="success"
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.0206] audit: op="networking-control" arg="global-dns-configuration" pid=51684 uid=0 result="success"
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.0561] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.0740] audit: op="networking-control" arg="global-dns-configuration" pid=51684 uid=0 result="success"
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.0779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51684 uid=0 result="success"
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.2639] checkpoint[0x56264dd60a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 02:43:44 np0005534516 NetworkManager[48915]: <info>  [1764056624.2643] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51684 uid=0 result="success"
Nov 25 02:43:44 np0005534516 ansible-async_wrapper.py[51682]: Module complete (51682)
Nov 25 02:43:46 np0005534516 python3.9[52154]: ansible-ansible.legacy.async_status Invoked with jid=j471873380631.51678 mode=status _async_dir=/root/.ansible_async
Nov 25 02:43:46 np0005534516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 02:43:46 np0005534516 python3.9[52253]: ansible-ansible.legacy.async_status Invoked with jid=j471873380631.51678 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 02:43:47 np0005534516 python3.9[52408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:43:48 np0005534516 python3.9[52531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056627.1510327-322-170497638427062/.source.returncode _original_basename=.upyb4ian follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:48 np0005534516 ansible-async_wrapper.py[51681]: Done in kid B.
Nov 25 02:43:49 np0005534516 python3.9[52683]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:43:49 np0005534516 python3.9[52807]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056628.6101532-338-29787796026537/.source.cfg _original_basename=.eiwr5llt follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:43:50 np0005534516 python3.9[52959]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:43:50 np0005534516 systemd[1]: Reloading Network Manager...
Nov 25 02:43:50 np0005534516 NetworkManager[48915]: <info>  [1764056630.7905] audit: op="reload" arg="0" pid=52963 uid=0 result="success"
Nov 25 02:43:50 np0005534516 NetworkManager[48915]: <info>  [1764056630.7912] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 02:43:50 np0005534516 systemd[1]: Reloaded Network Manager.
Nov 25 02:43:51 np0005534516 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 25 02:43:51 np0005534516 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 02:43:51 np0005534516 systemd[1]: session-10.scope: Consumed 55.511s CPU time.
Nov 25 02:43:51 np0005534516 systemd-logind[822]: Removed session 10.
Nov 25 02:43:56 np0005534516 systemd-logind[822]: New session 11 of user zuul.
Nov 25 02:43:56 np0005534516 systemd[1]: Started Session 11 of User zuul.
Nov 25 02:43:57 np0005534516 python3.9[53147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:43:58 np0005534516 python3.9[53301]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:43:59 np0005534516 python3.9[53495]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:44:00 np0005534516 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 02:44:00 np0005534516 systemd[1]: session-11.scope: Consumed 2.789s CPU time.
Nov 25 02:44:00 np0005534516 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 25 02:44:00 np0005534516 systemd-logind[822]: Removed session 11.
Nov 25 02:44:00 np0005534516 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 02:44:05 np0005534516 systemd-logind[822]: New session 12 of user zuul.
Nov 25 02:44:05 np0005534516 systemd[1]: Started Session 12 of User zuul.
Nov 25 02:44:06 np0005534516 python3.9[53678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:44:07 np0005534516 python3.9[53832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:44:09 np0005534516 python3.9[53989]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:44:09 np0005534516 python3.9[54073]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:44:12 np0005534516 python3.9[54227]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:44:13 np0005534516 python3.9[54422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:14 np0005534516 python3.9[54574]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:44:14 np0005534516 podman[54575]: 2025-11-25 07:44:14.627803056 +0000 UTC m=+0.068376469 system refresh
Nov 25 02:44:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:44:15 np0005534516 python3.9[54737]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:44:16 np0005534516 python3.9[54860]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056654.87247-79-176300361863766/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9569cbc703d6d315d24e7671f7cb0025f02256d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:17 np0005534516 python3.9[55012]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:44:18 np0005534516 python3.9[55135]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764056656.7858498-94-181296916619940/.source.conf follow=False _original_basename=registries.conf.j2 checksum=485c636425e28137b9c2e788e9d5fc748a88106d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:44:18 np0005534516 python3.9[55287]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:44:19 np0005534516 python3.9[55439]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:44:20 np0005534516 python3.9[55591]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:44:21 np0005534516 python3.9[55743]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:44:22 np0005534516 python3.9[55895]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:44:24 np0005534516 python3.9[56048]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:44:25 np0005534516 python3.9[56202]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:44:26 np0005534516 python3.9[56354]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:44:26 np0005534516 python3.9[56506]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:44:27 np0005534516 python3.9[56659]: ansible-service_facts Invoked
Nov 25 02:44:27 np0005534516 network[56676]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 02:44:27 np0005534516 network[56677]: 'network-scripts' will be removed from distribution in near future.
Nov 25 02:44:27 np0005534516 network[56678]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 02:44:35 np0005534516 python3.9[57130]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:44:38 np0005534516 python3.9[57283]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 02:44:39 np0005534516 python3.9[57435]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:44:40 np0005534516 python3.9[57560]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056679.0823808-238-117709957469379/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:41 np0005534516 python3.9[57714]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:44:41 np0005534516 python3.9[57839]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056680.500643-253-173885912469457/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:42 np0005534516 python3.9[57993]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:44 np0005534516 python3.9[58147]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:44:45 np0005534516 python3.9[58231]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:44:46 np0005534516 python3.9[58385]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:44:47 np0005534516 python3.9[58469]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:44:47 np0005534516 chronyd[831]: chronyd exiting
Nov 25 02:44:47 np0005534516 systemd[1]: Stopping NTP client/server...
Nov 25 02:44:47 np0005534516 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 02:44:47 np0005534516 systemd[1]: Stopped NTP client/server.
Nov 25 02:44:47 np0005534516 systemd[1]: Starting NTP client/server...
Nov 25 02:44:47 np0005534516 chronyd[58478]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 02:44:47 np0005534516 chronyd[58478]: Frequency -26.552 +/- 0.112 ppm read from /var/lib/chrony/drift
Nov 25 02:44:47 np0005534516 chronyd[58478]: Loaded seccomp filter (level 2)
Nov 25 02:44:47 np0005534516 systemd[1]: Started NTP client/server.
Nov 25 02:44:47 np0005534516 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 02:44:47 np0005534516 systemd[1]: session-12.scope: Consumed 29.902s CPU time.
Nov 25 02:44:47 np0005534516 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 25 02:44:47 np0005534516 systemd-logind[822]: Removed session 12.
Nov 25 02:44:54 np0005534516 systemd-logind[822]: New session 13 of user zuul.
Nov 25 02:44:54 np0005534516 systemd[1]: Started Session 13 of User zuul.
Nov 25 02:44:55 np0005534516 python3.9[58659]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:56 np0005534516 python3.9[58811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:44:56 np0005534516 python3.9[58934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056695.3833473-34-158368890360658/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:44:57 np0005534516 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 02:44:57 np0005534516 systemd[1]: session-13.scope: Consumed 1.922s CPU time.
Nov 25 02:44:57 np0005534516 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 25 02:44:57 np0005534516 systemd-logind[822]: Removed session 13.
Nov 25 02:45:05 np0005534516 systemd-logind[822]: New session 14 of user zuul.
Nov 25 02:45:05 np0005534516 systemd[1]: Started Session 14 of User zuul.
Nov 25 02:45:06 np0005534516 python3.9[59112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:45:07 np0005534516 python3.9[59268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:08 np0005534516 python3.9[59443]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:09 np0005534516 python3.9[59566]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764056708.1448162-41-183628737309148/.source.json _original_basename=.x4tf67dr follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:10 np0005534516 python3.9[59718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:11 np0005534516 python3.9[59841]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056710.0545652-64-65612993723358/.source _original_basename=.imum5ke2 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:12 np0005534516 python3.9[59993]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:45:12 np0005534516 python3.9[60145]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:13 np0005534516 python3.9[60268]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764056712.3428316-88-223721045337732/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:45:14 np0005534516 python3.9[60420]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:14 np0005534516 python3.9[60543]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764056713.7795763-88-26987765406550/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:45:15 np0005534516 python3.9[60695]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:16 np0005534516 python3.9[60847]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:17 np0005534516 python3.9[60970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056715.839126-125-31629615428724/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:17 np0005534516 python3.9[61122]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:18 np0005534516 python3.9[61245]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056717.2831478-140-92844393116671/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:19 np0005534516 python3.9[61397]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:45:19 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:19 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:19 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:19 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:19 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:19 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:20 np0005534516 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 02:45:20 np0005534516 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 02:45:20 np0005534516 python3.9[61624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:21 np0005534516 python3.9[61747]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056720.2578502-163-199886694038573/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:22 np0005534516 python3.9[61899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:22 np0005534516 python3.9[62022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056721.6097775-178-25210408519867/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:23 np0005534516 python3.9[62174]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:45:23 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:23 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:23 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:23 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:23 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:23 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:24 np0005534516 systemd[1]: Starting Create netns directory...
Nov 25 02:45:24 np0005534516 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 02:45:24 np0005534516 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 02:45:24 np0005534516 systemd[1]: Finished Create netns directory.
Nov 25 02:45:24 np0005534516 python3.9[62401]: ansible-ansible.builtin.service_facts Invoked
Nov 25 02:45:25 np0005534516 network[62418]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 02:45:25 np0005534516 network[62419]: 'network-scripts' will be removed from distribution in near future.
Nov 25 02:45:25 np0005534516 network[62420]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 02:45:29 np0005534516 python3.9[62682]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:45:29 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:29 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:29 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:29 np0005534516 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 02:45:29 np0005534516 iptables.init[62722]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 02:45:29 np0005534516 iptables.init[62722]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 02:45:29 np0005534516 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 02:45:29 np0005534516 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 02:45:30 np0005534516 python3.9[62918]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:45:31 np0005534516 python3.9[63072]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:45:31 np0005534516 systemd[1]: Reloading.
Nov 25 02:45:31 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:45:31 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:45:31 np0005534516 systemd[1]: Starting Netfilter Tables...
Nov 25 02:45:31 np0005534516 systemd[1]: Finished Netfilter Tables.
Nov 25 02:45:32 np0005534516 python3.9[63264]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:45:33 np0005534516 python3.9[63417]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:34 np0005534516 python3.9[63542]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056733.18118-247-252063572495629/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:35 np0005534516 python3.9[63695]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:45:35 np0005534516 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 02:45:35 np0005534516 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 02:45:36 np0005534516 python3.9[63851]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:36 np0005534516 python3.9[64003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:37 np0005534516 python3.9[64126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056736.2473035-278-17023273272870/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:38 np0005534516 python3.9[64278]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 02:45:38 np0005534516 systemd[1]: Starting Time & Date Service...
Nov 25 02:45:38 np0005534516 systemd[1]: Started Time & Date Service.
Nov 25 02:45:39 np0005534516 python3.9[64434]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:40 np0005534516 python3.9[64586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:40 np0005534516 python3.9[64709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056739.7075076-313-42050281449309/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:41 np0005534516 python3.9[64861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:42 np0005534516 python3.9[64984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764056741.1849284-328-35994961437214/.source.yaml _original_basename=.nhhm7tgb follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:43 np0005534516 python3.9[65136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:43 np0005534516 python3.9[65259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056742.5833347-343-18184737420708/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:44 np0005534516 python3.9[65411]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:45:45 np0005534516 python3.9[65564]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:45:46 np0005534516 python3[65717]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 02:45:47 np0005534516 python3.9[65869]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:47 np0005534516 python3.9[65992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056746.6414728-382-14540925171865/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:48 np0005534516 python3.9[66144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:49 np0005534516 python3.9[66267]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056748.0026855-397-251795104595855/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:49 np0005534516 python3.9[66419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:50 np0005534516 python3.9[66542]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056749.2950435-412-30686619563790/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:51 np0005534516 python3.9[66694]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:52 np0005534516 python3.9[66817]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056750.7473702-427-135961645918694/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:53 np0005534516 python3.9[66969]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:45:53 np0005534516 python3.9[67092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764056752.4113028-442-114736020936183/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:54 np0005534516 python3.9[67244]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:55 np0005534516 python3.9[67396]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:45:56 np0005534516 python3.9[67555]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:57 np0005534516 python3.9[67708]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:57 np0005534516 python3.9[67860]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:45:58 np0005534516 python3.9[68012]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 02:45:59 np0005534516 python3.9[68165]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 02:46:00 np0005534516 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 02:46:00 np0005534516 systemd[1]: session-14.scope: Consumed 40.341s CPU time.
Nov 25 02:46:00 np0005534516 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 25 02:46:00 np0005534516 systemd-logind[822]: Removed session 14.
Nov 25 02:46:06 np0005534516 systemd-logind[822]: New session 15 of user zuul.
Nov 25 02:46:06 np0005534516 systemd[1]: Started Session 15 of User zuul.
Nov 25 02:46:07 np0005534516 python3.9[68346]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 02:46:07 np0005534516 python3.9[68498]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:46:08 np0005534516 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 02:46:09 np0005534516 python3.9[68652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:46:10 np0005534516 python3.9[68804]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDThrOmzZYk/yE7aj0qiUi2XHBodTgD3KuWbEqzNcYV2X9Y5iPECkOOTFwJQ5k3cBVPuxkrbm8m5jaIrc02FUvnMr91CENNlwaofZ5AbVutEMFXttCm2tiap60neuJvQ6E/ODoqVM+rGFs35RB0MJGHUEmc/3ouQ8LzXu4pfQVyHqqYTKzDOlHGGNkVhsiyy2+l/gH7ji0TuOzyatWsvqFSXlSuqlBxETH97ZkLa03H2SqH78HQjb7ck+TKBoDWw6rDtlN6LUCxwwpi3pPQ1lJodoVl2kMH10vwpYoW3pxc7YXUZy6ZHoNOd0QpvhgG6XbKvvF5/PBrxDsYydJQ9+LKaHpQziGJx6eXlsD477eVQPEL616c6ha7mVp9E6ZGsejbP6C0OMXwd8Q/r4O1cXRJ6GMZ/ImJnsge6FL/mf6gH28M++DkpbjDhK3KaEwZaXu3RRkAW1ZEZgs6WU+ggGVhaRHa7oSirk8619FpfHUjYtyFY+ainauRqfgRTVmd5+8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMtlUnlv1KTTK/zS5HI6tQr3+y8723Yr0q2NjDTz7vbf#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMhA6ZDfxgRzvvaEqrlP+xcafhng5agflzPKbv8nTm33sRtBCypYw7k9dI0UHi1piXdLyh0sZ1wIA42UCPG+7g=#012 create=True mode=0644 path=/tmp/ansible.lu52ts5g state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:46:10 np0005534516 python3.9[68956]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lu52ts5g' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:11 np0005534516 python3.9[69110]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lu52ts5g state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:46:12 np0005534516 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 02:46:12 np0005534516 systemd[1]: session-15.scope: Consumed 3.679s CPU time.
Nov 25 02:46:12 np0005534516 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 25 02:46:12 np0005534516 systemd-logind[822]: Removed session 15.
Nov 25 02:46:17 np0005534516 systemd-logind[822]: New session 16 of user zuul.
Nov 25 02:46:17 np0005534516 systemd[1]: Started Session 16 of User zuul.
Nov 25 02:46:18 np0005534516 python3.9[69288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:46:20 np0005534516 python3.9[69444]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 02:46:20 np0005534516 python3.9[69598]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:46:21 np0005534516 python3.9[69751]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:22 np0005534516 python3.9[69904]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:46:23 np0005534516 python3.9[70058]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:24 np0005534516 python3.9[70213]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:46:24 np0005534516 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 02:46:24 np0005534516 systemd[1]: session-16.scope: Consumed 4.840s CPU time.
Nov 25 02:46:24 np0005534516 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 25 02:46:24 np0005534516 systemd-logind[822]: Removed session 16.
Nov 25 02:46:29 np0005534516 systemd-logind[822]: New session 17 of user zuul.
Nov 25 02:46:29 np0005534516 systemd[1]: Started Session 17 of User zuul.
Nov 25 02:46:30 np0005534516 python3.9[70392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:46:31 np0005534516 python3.9[70548]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:46:32 np0005534516 python3.9[70632]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 02:46:35 np0005534516 python3.9[70783]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:36 np0005534516 python3.9[70934]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 02:46:37 np0005534516 python3.9[71084]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:46:37 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 02:46:38 np0005534516 python3.9[71235]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:46:38 np0005534516 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 02:46:38 np0005534516 systemd[1]: session-17.scope: Consumed 6.290s CPU time.
Nov 25 02:46:38 np0005534516 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 25 02:46:38 np0005534516 systemd-logind[822]: Removed session 17.
Nov 25 02:46:46 np0005534516 systemd-logind[822]: New session 18 of user zuul.
Nov 25 02:46:46 np0005534516 systemd[1]: Started Session 18 of User zuul.
Nov 25 02:46:53 np0005534516 python3[72001]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:46:54 np0005534516 python3[72096]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 02:46:56 np0005534516 chronyd[58478]: Selected source 54.39.23.64 (pool.ntp.org)
Nov 25 02:46:56 np0005534516 python3[72123]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:46:56 np0005534516 python3[72149]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:56 np0005534516 kernel: loop: module loaded
Nov 25 02:46:56 np0005534516 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 02:46:57 np0005534516 python3[72184]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:46:57 np0005534516 lvm[72187]: PV /dev/loop3 not used.
Nov 25 02:46:57 np0005534516 lvm[72189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 02:46:57 np0005534516 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 02:46:57 np0005534516 lvm[72199]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 02:46:57 np0005534516 lvm[72199]: VG ceph_vg0 finished
Nov 25 02:46:57 np0005534516 lvm[72195]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 02:46:57 np0005534516 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 25 02:46:58 np0005534516 python3[72277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:46:58 np0005534516 python3[72350]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764056817.9446142-36179-79450578197579/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:46:59 np0005534516 python3[72400]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:46:59 np0005534516 systemd[1]: Reloading.
Nov 25 02:46:59 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:46:59 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:46:59 np0005534516 systemd[1]: Starting Ceph OSD losetup...
Nov 25 02:46:59 np0005534516 bash[72442]: /dev/loop3: [64513]:4327756 (/var/lib/ceph-osd-0.img)
Nov 25 02:47:00 np0005534516 systemd[1]: Finished Ceph OSD losetup.
Nov 25 02:47:00 np0005534516 lvm[72444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 02:47:00 np0005534516 lvm[72444]: VG ceph_vg0 finished
Nov 25 02:47:00 np0005534516 python3[72470]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 02:47:02 np0005534516 python3[72497]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:02 np0005534516 python3[72523]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:02 np0005534516 kernel: loop4: detected capacity change from 0 to 41943040
Nov 25 02:47:03 np0005534516 python3[72554]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:03 np0005534516 lvm[72559]: PV /dev/loop4 not used.
Nov 25 02:47:03 np0005534516 lvm[72560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 02:47:03 np0005534516 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 25 02:47:03 np0005534516 lvm[72562]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 25 02:47:03 np0005534516 lvm[72571]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 02:47:03 np0005534516 lvm[72571]: VG ceph_vg1 finished
Nov 25 02:47:04 np0005534516 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 25 02:47:04 np0005534516 python3[72649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:47:05 np0005534516 python3[72722]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764056824.2021503-36206-173305689815850/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:05 np0005534516 python3[72772]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:47:05 np0005534516 systemd[1]: Reloading.
Nov 25 02:47:05 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:47:05 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:47:06 np0005534516 systemd[1]: Starting Ceph OSD losetup...
Nov 25 02:47:06 np0005534516 bash[72813]: /dev/loop4: [64513]:4327911 (/var/lib/ceph-osd-1.img)
Nov 25 02:47:06 np0005534516 systemd[1]: Finished Ceph OSD losetup.
Nov 25 02:47:06 np0005534516 lvm[72814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 02:47:06 np0005534516 lvm[72814]: VG ceph_vg1 finished
Nov 25 02:47:07 np0005534516 python3[72840]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 02:47:08 np0005534516 python3[72867]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:09 np0005534516 python3[72893]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:09 np0005534516 kernel: loop5: detected capacity change from 0 to 41943040
Nov 25 02:47:09 np0005534516 python3[72925]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:10 np0005534516 lvm[72928]: PV /dev/loop5 not used.
Nov 25 02:47:10 np0005534516 lvm[72930]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 02:47:10 np0005534516 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 25 02:47:10 np0005534516 lvm[72933]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 02:47:10 np0005534516 lvm[72933]: VG ceph_vg2 finished
Nov 25 02:47:10 np0005534516 lvm[72932]:  0 logical volume(s) in volume group "ceph_vg2" now active
Nov 25 02:47:10 np0005534516 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 25 02:47:10 np0005534516 lvm[72941]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 02:47:10 np0005534516 lvm[72941]: VG ceph_vg2 finished
Nov 25 02:47:11 np0005534516 python3[73019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:47:11 np0005534516 python3[73092]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764056830.8316793-36233-88678037424316/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:12 np0005534516 python3[73142]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:47:12 np0005534516 systemd[1]: Reloading.
Nov 25 02:47:12 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:47:12 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:47:12 np0005534516 systemd[1]: Starting Ceph OSD losetup...
Nov 25 02:47:12 np0005534516 bash[73182]: /dev/loop5: [64513]:4327913 (/var/lib/ceph-osd-2.img)
Nov 25 02:47:12 np0005534516 systemd[1]: Finished Ceph OSD losetup.
Nov 25 02:47:13 np0005534516 lvm[73184]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 02:47:13 np0005534516 lvm[73184]: VG ceph_vg2 finished
Nov 25 02:47:15 np0005534516 python3[73208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:47:17 np0005534516 python3[73301]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 02:47:22 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 02:47:22 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 02:47:23 np0005534516 python3[73411]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:24 np0005534516 python3[73439]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:25 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 02:47:25 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 02:47:25 np0005534516 systemd[1]: run-r746e23285d884682959f00402b6428d8.service: Deactivated successfully.
Nov 25 02:47:25 np0005534516 python3[73504]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:25 np0005534516 python3[73530]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:26 np0005534516 python3[73608]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:47:26 np0005534516 python3[73681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764056846.2301497-36382-77331135226589/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:27 np0005534516 python3[73783]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:47:28 np0005534516 python3[73856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764056847.3585308-36400-49980838598311/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:47:28 np0005534516 python3[73906]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:29 np0005534516 python3[73934]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:29 np0005534516 python3[73962]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:47:29 np0005534516 python3[73990]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:47:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:30 np0005534516 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 02:47:30 np0005534516 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 02:47:30 np0005534516 systemd-logind[822]: New session 19 of user ceph-admin.
Nov 25 02:47:30 np0005534516 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 02:47:30 np0005534516 systemd[1]: Starting User Manager for UID 42477...
Nov 25 02:47:30 np0005534516 systemd[74011]: Queued start job for default target Main User Target.
Nov 25 02:47:30 np0005534516 systemd[74011]: Created slice User Application Slice.
Nov 25 02:47:30 np0005534516 systemd[74011]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 02:47:30 np0005534516 systemd[74011]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 02:47:30 np0005534516 systemd[74011]: Reached target Paths.
Nov 25 02:47:30 np0005534516 systemd[74011]: Reached target Timers.
Nov 25 02:47:30 np0005534516 systemd[74011]: Starting D-Bus User Message Bus Socket...
Nov 25 02:47:30 np0005534516 systemd[74011]: Starting Create User's Volatile Files and Directories...
Nov 25 02:47:30 np0005534516 systemd[74011]: Listening on D-Bus User Message Bus Socket.
Nov 25 02:47:30 np0005534516 systemd[74011]: Reached target Sockets.
Nov 25 02:47:30 np0005534516 systemd[74011]: Finished Create User's Volatile Files and Directories.
Nov 25 02:47:30 np0005534516 systemd[74011]: Reached target Basic System.
Nov 25 02:47:30 np0005534516 systemd[74011]: Reached target Main User Target.
Nov 25 02:47:30 np0005534516 systemd[74011]: Startup finished in 157ms.
Nov 25 02:47:30 np0005534516 systemd[1]: Started User Manager for UID 42477.
Nov 25 02:47:30 np0005534516 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 02:47:30 np0005534516 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 02:47:30 np0005534516 systemd-logind[822]: Session 19 logged out. Waiting for processes to exit.
Nov 25 02:47:30 np0005534516 systemd-logind[822]: Removed session 19.
Nov 25 02:47:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-compat2842501658-merged.mount: Deactivated successfully.
Nov 25 02:47:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-compat2842501658-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 02:47:40 np0005534516 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 02:47:40 np0005534516 systemd[74011]: Activating special unit Exit the Session...
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped target Main User Target.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped target Basic System.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped target Paths.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped target Sockets.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped target Timers.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 02:47:40 np0005534516 systemd[74011]: Closed D-Bus User Message Bus Socket.
Nov 25 02:47:40 np0005534516 systemd[74011]: Stopped Create User's Volatile Files and Directories.
Nov 25 02:47:40 np0005534516 systemd[74011]: Removed slice User Application Slice.
Nov 25 02:47:40 np0005534516 systemd[74011]: Reached target Shutdown.
Nov 25 02:47:40 np0005534516 systemd[74011]: Finished Exit the Session.
Nov 25 02:47:40 np0005534516 systemd[74011]: Reached target Exit the Session.
Nov 25 02:47:40 np0005534516 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 02:47:40 np0005534516 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 02:47:40 np0005534516 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 02:47:40 np0005534516 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 02:47:40 np0005534516 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 02:47:40 np0005534516 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 02:47:40 np0005534516 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 02:47:57 np0005534516 podman[74065]: 2025-11-25 07:47:57.636562208 +0000 UTC m=+26.958644760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:57 np0005534516 podman[74127]: 2025-11-25 07:47:57.732552871 +0000 UTC m=+0.060105908 container create 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:47:57 np0005534516 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 02:47:57 np0005534516 systemd[1]: Started libpod-conmon-9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990.scope.
Nov 25 02:47:57 np0005534516 podman[74127]: 2025-11-25 07:47:57.712692466 +0000 UTC m=+0.040245503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:57 np0005534516 podman[74127]: 2025-11-25 07:47:57.864879051 +0000 UTC m=+0.192432168 container init 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:47:57 np0005534516 podman[74127]: 2025-11-25 07:47:57.877219544 +0000 UTC m=+0.204772541 container start 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 02:47:57 np0005534516 podman[74127]: 2025-11-25 07:47:57.883248446 +0000 UTC m=+0.210801483 container attach 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:47:58 np0005534516 strange_hodgkin[74144]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74127]: 2025-11-25 07:47:58.168959994 +0000 UTC m=+0.496512991 container died 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:47:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7bb66e53eee74e1d04d3056f162de6676f3f97ea013bfce4790c45d68f885aa5-merged.mount: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74127]: 2025-11-25 07:47:58.233985243 +0000 UTC m=+0.561538240 container remove 9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990 (image=quay.io/ceph/ceph:v18, name=strange_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-conmon-9963ea65a5b8b12a4489ba54809fd016e3c7c7241b1afefe5b8c46620daa5990.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.322001652 +0000 UTC m=+0.054580740 container create 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.30445922 +0000 UTC m=+0.037038328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:58 np0005534516 systemd[1]: Started libpod-conmon-3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2.scope.
Nov 25 02:47:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.522184809 +0000 UTC m=+0.254763897 container init 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.531087938 +0000 UTC m=+0.263667026 container start 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:47:58 np0005534516 objective_shannon[74179]: 167 167
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.547569561 +0000 UTC m=+0.280148649 container attach 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.547857849 +0000 UTC m=+0.280436937 container died 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:58 np0005534516 podman[74162]: 2025-11-25 07:47:58.624875832 +0000 UTC m=+0.357454930 container remove 3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2 (image=quay.io/ceph/ceph:v18, name=objective_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-conmon-3b16a015077af096bf5465d289cbad5750864e82b9c380852de6214ccc7dbcf2.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.694929427 +0000 UTC m=+0.050150670 container create 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 02:47:58 np0005534516 systemd[1]: Started libpod-conmon-5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200.scope.
Nov 25 02:47:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.672432942 +0000 UTC m=+0.027654175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.802153882 +0000 UTC m=+0.157375125 container init 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.810928258 +0000 UTC m=+0.166149511 container start 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.8221419 +0000 UTC m=+0.177363153 container attach 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:58 np0005534516 brave_beaver[74213]: AQAuXyVpNlNyMRAA4L/IO2YQBRiyrypQBMfWcA==
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.833476845 +0000 UTC m=+0.188698098 container died 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:47:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0f8f1e5769684bf6de126024719a662bd917bdd757944be060847aa6e845e3a0-merged.mount: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74197]: 2025-11-25 07:47:58.883896752 +0000 UTC m=+0.239118005 container remove 5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:47:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:58 np0005534516 systemd[1]: libpod-conmon-5ff2dfc5d34ee49f317be7e578a100208db1a511e6fe6b925ba2da402fe25200.scope: Deactivated successfully.
Nov 25 02:47:58 np0005534516 podman[74230]: 2025-11-25 07:47:58.947560105 +0000 UTC m=+0.045883116 container create 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 02:47:58 np0005534516 systemd[1]: Started libpod-conmon-0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d.scope.
Nov 25 02:47:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:59.008219697 +0000 UTC m=+0.106542708 container init 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:59.013461528 +0000 UTC m=+0.111784579 container start 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:58.922053279 +0000 UTC m=+0.020376340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:59.02059865 +0000 UTC m=+0.118921661 container attach 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:59 np0005534516 vibrant_williamson[74247]: AQAvXyVpMgvxAhAAULnBiJqcBIpdV6NoM7U6xQ==
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:59.05478556 +0000 UTC m=+0.153108591 container died 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 02:47:59 np0005534516 podman[74230]: 2025-11-25 07:47:59.095700251 +0000 UTC m=+0.194023262 container remove 0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d (image=quay.io/ceph/ceph:v18, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-conmon-0487bd57925c53b0de233100c8b1b414a677199212853d39cdb0b35ba69a1f7d.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.184791889 +0000 UTC m=+0.061159597 container create 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:47:59 np0005534516 systemd[1]: Started libpod-conmon-0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428.scope.
Nov 25 02:47:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.16481735 +0000 UTC m=+0.041185078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.271295996 +0000 UTC m=+0.147663784 container init 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.282069276 +0000 UTC m=+0.158437024 container start 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.293543954 +0000 UTC m=+0.169911782 container attach 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:47:59 np0005534516 gracious_bassi[74282]: AQAvXyVpFtfUERAA04GeiATk3dTMLACx//2F4A==
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.30264677 +0000 UTC m=+0.179014478 container died 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:47:59 np0005534516 podman[74266]: 2025-11-25 07:47:59.35840913 +0000 UTC m=+0.234776838 container remove 0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428 (image=quay.io/ceph/ceph:v18, name=gracious_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-conmon-0a591dabd97fd30f27286c1d7e89d935f778867beb7ac16fa69b2a28bd5f5428.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.433515871 +0000 UTC m=+0.050654074 container create 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:59 np0005534516 systemd[1]: Started libpod-conmon-84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213.scope.
Nov 25 02:47:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebe1f7f2892e2efeb6a7ca5bda065ea397b2f461c4224ea44b52bc0469fe763/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.410964994 +0000 UTC m=+0.028103197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.523354298 +0000 UTC m=+0.140492581 container init 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.53269322 +0000 UTC m=+0.149831393 container start 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.538360653 +0000 UTC m=+0.155498916 container attach 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:47:59 np0005534516 crazy_maxwell[74318]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 02:47:59 np0005534516 crazy_maxwell[74318]: setting min_mon_release = pacific
Nov 25 02:47:59 np0005534516 crazy_maxwell[74318]: /usr/bin/monmaptool: set fsid to a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:47:59 np0005534516 crazy_maxwell[74318]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.58472858 +0000 UTC m=+0.201866783 container died 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:47:59 np0005534516 podman[74302]: 2025-11-25 07:47:59.62523682 +0000 UTC m=+0.242375003 container remove 84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213 (image=quay.io/ceph/ceph:v18, name=crazy_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-conmon-84c185824e84c98d87ad4fdf1052678c776cfec6b3b3b4bcb14ad8a26655a213.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74337]: 2025-11-25 07:47:59.696879508 +0000 UTC m=+0.047556591 container create 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:47:59 np0005534516 systemd[1]: Started libpod-conmon-5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275.scope.
Nov 25 02:47:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:47:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2a1038abacd22278f3ba3ed45425672e827785779339c813ae16c473dcf20f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:47:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2a1038abacd22278f3ba3ed45425672e827785779339c813ae16c473dcf20f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 02:47:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2a1038abacd22278f3ba3ed45425672e827785779339c813ae16c473dcf20f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:47:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2a1038abacd22278f3ba3ed45425672e827785779339c813ae16c473dcf20f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:47:59 np0005534516 podman[74337]: 2025-11-25 07:47:59.66874284 +0000 UTC m=+0.019419843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:47:59 np0005534516 podman[74337]: 2025-11-25 07:47:59.779581093 +0000 UTC m=+0.130258126 container init 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 02:47:59 np0005534516 podman[74337]: 2025-11-25 07:47:59.785126293 +0000 UTC m=+0.135803276 container start 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 02:47:59 np0005534516 podman[74337]: 2025-11-25 07:47:59.788512103 +0000 UTC m=+0.139189096 container attach 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275.scope: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74381]: 2025-11-25 07:47:59.924816732 +0000 UTC m=+0.025189589 container died 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:47:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7f2a1038abacd22278f3ba3ed45425672e827785779339c813ae16c473dcf20f-merged.mount: Deactivated successfully.
Nov 25 02:47:59 np0005534516 podman[74381]: 2025-11-25 07:47:59.972062313 +0000 UTC m=+0.072435190 container remove 5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275 (image=quay.io/ceph/ceph:v18, name=ecstatic_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:47:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:47:59 np0005534516 systemd[1]: libpod-conmon-5e389c19452bf61564706dafafeb4894510c614e4ab07e0ede4b5aeb00ac0275.scope: Deactivated successfully.
Nov 25 02:48:00 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:00 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:00 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:00 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:00 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:00 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:00 np0005534516 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 02:48:00 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:00 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:00 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:00 np0005534516 systemd[1]: Reached target Ceph cluster a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:00 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:00 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:00 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:01 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:01 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:01 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:01 np0005534516 systemd[1]: Created slice Slice /system/ceph-a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:01 np0005534516 systemd[1]: Reached target System Time Set.
Nov 25 02:48:01 np0005534516 systemd[1]: Reached target System Time Synchronized.
Nov 25 02:48:01 np0005534516 systemd[1]: Starting Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:48:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:48:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:48:01 np0005534516 podman[74634]: 2025-11-25 07:48:01.541860263 +0000 UTC m=+0.039653499 container create 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5de42fa6e774fce74c007edd281475818d783652393836ded61d9b2233bc06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5de42fa6e774fce74c007edd281475818d783652393836ded61d9b2233bc06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5de42fa6e774fce74c007edd281475818d783652393836ded61d9b2233bc06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5de42fa6e774fce74c007edd281475818d783652393836ded61d9b2233bc06/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 podman[74634]: 2025-11-25 07:48:01.524879551 +0000 UTC m=+0.022672817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:01 np0005534516 podman[74634]: 2025-11-25 07:48:01.623695223 +0000 UTC m=+0.121488519 container init 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 02:48:01 np0005534516 podman[74634]: 2025-11-25 07:48:01.629386128 +0000 UTC m=+0.127179364 container start 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 02:48:01 np0005534516 bash[74634]: 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270
Nov 25 02:48:01 np0005534516 systemd[1]: Started Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: pidfile_write: ignore empty --pid-file
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: load: jerasure load: lrc 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Git sha 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: DB SUMMARY
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: DB Session ID:  GQ07Z00E8LTOCQNK2WGO
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                                     Options.env: 0x55d75ee2ac40
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                                Options.info_log: 0x55d760b26e80
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                                 Options.wal_dir: 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                    Options.write_buffer_manager: 0x55d760b36b40
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                               Options.row_cache: None
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                              Options.wal_filter: None
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.wal_compression: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.max_background_jobs: 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Compression algorithms supported:
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kZSTD supported: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kXpressCompression supported: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kZlibCompression supported: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:           Options.merge_operator: 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:        Options.compaction_filter: None
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d760b26a80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d760b1f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.compression: NoCompression
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.num_levels: 7
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc5dc6ae-0fb8-474e-b34d-e37705a41add
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056881679862, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056881682179, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "GQ07Z00E8LTOCQNK2WGO", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056881682294, "job": 1, "event": "recovery_finished"}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d760b48e00
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: DB pointer 0x55d760bd2000
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d760b1f1f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@-1(???) e0 preinit fsid a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-25T07:47:59.827974Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 new map
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 02:48:01 np0005534516 podman[74655]: 2025-11-25 07:48:01.786577449 +0000 UTC m=+0.097093372 container create 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mkfs a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 02:48:01 np0005534516 podman[74655]: 2025-11-25 07:48:01.735177111 +0000 UTC m=+0.045693044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 02:48:01 np0005534516 systemd[1]: Started libpod-conmon-541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915.scope.
Nov 25 02:48:01 np0005534516 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2511e205a35ac93c4c35f2a68545943c952ac988d21735f749a492fd2c07e9fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2511e205a35ac93c4c35f2a68545943c952ac988d21735f749a492fd2c07e9fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2511e205a35ac93c4c35f2a68545943c952ac988d21735f749a492fd2c07e9fc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:01 np0005534516 podman[74655]: 2025-11-25 07:48:01.898074308 +0000 UTC m=+0.208590211 container init 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:48:01 np0005534516 podman[74655]: 2025-11-25 07:48:01.908748408 +0000 UTC m=+0.219264311 container start 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:48:01 np0005534516 podman[74655]: 2025-11-25 07:48:01.921470756 +0000 UTC m=+0.231986679 container attach 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:02 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 02:48:02 np0005534516 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515114660' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:  cluster:
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    id:     a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    health: HEALTH_OK
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]: 
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:  services:
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    mon: 1 daemons, quorum compute-0 (age 0.619122s)
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    mgr: no daemons active
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    osd: 0 osds: 0 up, 0 in
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]: 
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:  data:
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    pools:   0 pools, 0 pgs
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    objects: 0 objects, 0 B
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    usage:   0 B used, 0 B / 0 B avail
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]:    pgs:     
Nov 25 02:48:02 np0005534516 eager_lehmann[74710]: 
Nov 25 02:48:02 np0005534516 systemd[1]: libpod-541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915.scope: Deactivated successfully.
Nov 25 02:48:02 np0005534516 podman[74655]: 2025-11-25 07:48:02.376141353 +0000 UTC m=+0.686657256 container died 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:48:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2511e205a35ac93c4c35f2a68545943c952ac988d21735f749a492fd2c07e9fc-merged.mount: Deactivated successfully.
Nov 25 02:48:02 np0005534516 podman[74655]: 2025-11-25 07:48:02.778572056 +0000 UTC m=+1.089087999 container remove 541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915 (image=quay.io/ceph/ceph:v18, name=eager_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:02 np0005534516 systemd[1]: libpod-conmon-541226ce22b557f5091782110a18d326870465cee603501164d3a94092fc3915.scope: Deactivated successfully.
Nov 25 02:48:02 np0005534516 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:02 np0005534516 podman[74748]: 2025-11-25 07:48:02.942656359 +0000 UTC m=+0.133295382 container create 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 02:48:02 np0005534516 podman[74748]: 2025-11-25 07:48:02.852419146 +0000 UTC m=+0.043058259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:03 np0005534516 systemd[1]: Started libpod-conmon-2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639.scope.
Nov 25 02:48:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e08f8fb271ac5bcaca57678b3f6f431e8dbf6388f982c220b00ea0299eb0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e08f8fb271ac5bcaca57678b3f6f431e8dbf6388f982c220b00ea0299eb0f5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e08f8fb271ac5bcaca57678b3f6f431e8dbf6388f982c220b00ea0299eb0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e08f8fb271ac5bcaca57678b3f6f431e8dbf6388f982c220b00ea0299eb0f5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 podman[74748]: 2025-11-25 07:48:03.168101827 +0000 UTC m=+0.358740910 container init 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:48:03 np0005534516 podman[74748]: 2025-11-25 07:48:03.174221795 +0000 UTC m=+0.364860828 container start 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:48:03 np0005534516 podman[74748]: 2025-11-25 07:48:03.186187841 +0000 UTC m=+0.376826894 container attach 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 02:48:03 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 02:48:03 np0005534516 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4272927448' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:48:03 np0005534516 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4272927448' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 02:48:03 np0005534516 great_kapitsa[74765]: 
Nov 25 02:48:03 np0005534516 great_kapitsa[74765]: [global]
Nov 25 02:48:03 np0005534516 great_kapitsa[74765]: #011fsid = a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:03 np0005534516 great_kapitsa[74765]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 02:48:03 np0005534516 great_kapitsa[74765]: #011osd_crush_chooseleaf_type = 0
Nov 25 02:48:03 np0005534516 systemd[1]: libpod-2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639.scope: Deactivated successfully.
Nov 25 02:48:03 np0005534516 podman[74748]: 2025-11-25 07:48:03.623693411 +0000 UTC m=+0.814332444 container died 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 02:48:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-94e08f8fb271ac5bcaca57678b3f6f431e8dbf6388f982c220b00ea0299eb0f5-merged.mount: Deactivated successfully.
Nov 25 02:48:03 np0005534516 podman[74748]: 2025-11-25 07:48:03.745279132 +0000 UTC m=+0.935918155 container remove 2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639 (image=quay.io/ceph/ceph:v18, name=great_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 02:48:03 np0005534516 systemd[1]: libpod-conmon-2545b1eff9d6d67c6ec135b6bdf4923a46450abe05887687bb78ae74e6c38639.scope: Deactivated successfully.
Nov 25 02:48:03 np0005534516 podman[74805]: 2025-11-25 07:48:03.903139424 +0000 UTC m=+0.126915867 container create bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:03 np0005534516 podman[74805]: 2025-11-25 07:48:03.813953251 +0000 UTC m=+0.037729754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:03 np0005534516 ceph-mon[74654]: from='client.? 192.168.122.100:0/4272927448' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:48:03 np0005534516 ceph-mon[74654]: from='client.? 192.168.122.100:0/4272927448' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 02:48:03 np0005534516 systemd[1]: Started libpod-conmon-bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7.scope.
Nov 25 02:48:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ecc7a18cf09c6f0c76d2f1ce37cfe0014e0d3dbd0a29a8775365754e6624fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ecc7a18cf09c6f0c76d2f1ce37cfe0014e0d3dbd0a29a8775365754e6624fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ecc7a18cf09c6f0c76d2f1ce37cfe0014e0d3dbd0a29a8775365754e6624fa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ecc7a18cf09c6f0c76d2f1ce37cfe0014e0d3dbd0a29a8775365754e6624fa/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:04 np0005534516 podman[74805]: 2025-11-25 07:48:04.023933082 +0000 UTC m=+0.247709525 container init bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 02:48:04 np0005534516 podman[74805]: 2025-11-25 07:48:04.034031355 +0000 UTC m=+0.257807768 container start bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 02:48:04 np0005534516 podman[74805]: 2025-11-25 07:48:04.052909601 +0000 UTC m=+0.276686014 container attach bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398062569' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:48:04 np0005534516 systemd[1]: libpod-bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7.scope: Deactivated successfully.
Nov 25 02:48:04 np0005534516 podman[74805]: 2025-11-25 07:48:04.449484185 +0000 UTC m=+0.673260678 container died bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:48:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-07ecc7a18cf09c6f0c76d2f1ce37cfe0014e0d3dbd0a29a8775365754e6624fa-merged.mount: Deactivated successfully.
Nov 25 02:48:04 np0005534516 podman[74805]: 2025-11-25 07:48:04.535836556 +0000 UTC m=+0.759612999 container remove bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7 (image=quay.io/ceph/ceph:v18, name=focused_gould, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 02:48:04 np0005534516 systemd[1]: libpod-conmon-bb2847dbd11d246e19decbfeb5b58b1ab14829cbc3de251cdb28feb7a6c6c7e7.scope: Deactivated successfully.
Nov 25 02:48:04 np0005534516 systemd[1]: Stopping Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: mon.compute-0@0(leader) e1 shutdown
Nov 25 02:48:04 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0[74650]: 2025-11-25T07:48:04.782+0000 7f802c8ca640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 02:48:04 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0[74650]: 2025-11-25T07:48:04.782+0000 7f802c8ca640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 02:48:04 np0005534516 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 02:48:04 np0005534516 podman[74890]: 2025-11-25 07:48:04.947747974 +0000 UTC m=+0.206708526 container died 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0b5de42fa6e774fce74c007edd281475818d783652393836ded61d9b2233bc06-merged.mount: Deactivated successfully.
Nov 25 02:48:05 np0005534516 podman[74890]: 2025-11-25 07:48:05.143460842 +0000 UTC m=+0.402421364 container remove 4d90c4cff851eb85d8a7636900b8718e1d5ad47a8de60c2fd320071037de6270 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:05 np0005534516 bash[74890]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0
Nov 25 02:48:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:48:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 02:48:05 np0005534516 systemd[1]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mon.compute-0.service: Deactivated successfully.
Nov 25 02:48:05 np0005534516 systemd[1]: Stopped Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:05 np0005534516 systemd[1]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mon.compute-0.service: Consumed 1.129s CPU time.
Nov 25 02:48:05 np0005534516 systemd[1]: Starting Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:48:05 np0005534516 podman[74995]: 2025-11-25 07:48:05.642808763 +0000 UTC m=+0.064182450 container create 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c86a733479041a7ad2c6bc81731006eb7433fbc252ad7aa27fc9621ff5406e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c86a733479041a7ad2c6bc81731006eb7433fbc252ad7aa27fc9621ff5406e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c86a733479041a7ad2c6bc81731006eb7433fbc252ad7aa27fc9621ff5406e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c86a733479041a7ad2c6bc81731006eb7433fbc252ad7aa27fc9621ff5406e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 podman[74995]: 2025-11-25 07:48:05.615863202 +0000 UTC m=+0.037236939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:05 np0005534516 podman[74995]: 2025-11-25 07:48:05.761997134 +0000 UTC m=+0.183370801 container init 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:05 np0005534516 podman[74995]: 2025-11-25 07:48:05.769389718 +0000 UTC m=+0.190763365 container start 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:05 np0005534516 bash[74995]: 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956
Nov 25 02:48:05 np0005534516 systemd[1]: Started Ceph mon.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: pidfile_write: ignore empty --pid-file
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: load: jerasure load: lrc 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Git sha 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: DB SUMMARY
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: DB Session ID:  WCSCMBNWDQZ3QKJA3OWW
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                                     Options.env: 0x55e9656dcc40
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                                Options.info_log: 0x55e967ea7040
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                                 Options.wal_dir: 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                    Options.write_buffer_manager: 0x55e967eb6b40
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                               Options.row_cache: None
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                              Options.wal_filter: None
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.wal_compression: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.max_background_jobs: 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Compression algorithms supported:
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kZSTD supported: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kXpressCompression supported: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kZlibCompression supported: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:           Options.merge_operator: 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:        Options.compaction_filter: None
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e967ea6c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e967e9f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.compression: NoCompression
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.num_levels: 7
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc5dc6ae-0fb8-474e-b34d-e37705a41add
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056885811808, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056885814810, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056885, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764056885814964, "job": 1, "event": "recovery_finished"}
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e967ec8e00
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: DB pointer 0x55e967f52000
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.63 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.63 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???) e1 preinit fsid a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).mds e1 new map
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 02:48:05 np0005534516 podman[75016]: 2025-11-25 07:48:05.847941413 +0000 UTC m=+0.044160940 container create ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 02:48:05 np0005534516 systemd[1]: Started libpod-conmon-ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4.scope.
Nov 25 02:48:05 np0005534516 ceph-mon[75015]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 02:48:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5a1ab64a5605f713c7f891706f363f425a1e31a3a98ecc817511303581734f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5a1ab64a5605f713c7f891706f363f425a1e31a3a98ecc817511303581734f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5a1ab64a5605f713c7f891706f363f425a1e31a3a98ecc817511303581734f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:05 np0005534516 podman[75016]: 2025-11-25 07:48:05.831171968 +0000 UTC m=+0.027391545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:05 np0005534516 podman[75016]: 2025-11-25 07:48:05.937294471 +0000 UTC m=+0.133514008 container init ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:05 np0005534516 podman[75016]: 2025-11-25 07:48:05.949202515 +0000 UTC m=+0.145422072 container start ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:48:05 np0005534516 podman[75016]: 2025-11-25 07:48:05.953103099 +0000 UTC m=+0.149322626 container attach ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 25 02:48:06 np0005534516 systemd[1]: libpod-ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4.scope: Deactivated successfully.
Nov 25 02:48:06 np0005534516 podman[75016]: 2025-11-25 07:48:06.363909245 +0000 UTC m=+0.560128782 container died ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:48:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1c5a1ab64a5605f713c7f891706f363f425a1e31a3a98ecc817511303581734f-merged.mount: Deactivated successfully.
Nov 25 02:48:06 np0005534516 podman[75016]: 2025-11-25 07:48:06.409461594 +0000 UTC m=+0.605681121 container remove ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4 (image=quay.io/ceph/ceph:v18, name=practical_banzai, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:06 np0005534516 systemd[1]: libpod-conmon-ec1950d13cd4d91689f07029dd1ea4d65a25855bd28ba2ab3488488ac4b579b4.scope: Deactivated successfully.
Nov 25 02:48:06 np0005534516 podman[75109]: 2025-11-25 07:48:06.472019376 +0000 UTC m=+0.040337760 container create 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:48:06 np0005534516 systemd[1]: Started libpod-conmon-76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0.scope.
Nov 25 02:48:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4348af3d4582a8b13d30628543f582199c58dba5975036d8a39c205c25f44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4348af3d4582a8b13d30628543f582199c58dba5975036d8a39c205c25f44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d4348af3d4582a8b13d30628543f582199c58dba5975036d8a39c205c25f44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:06 np0005534516 podman[75109]: 2025-11-25 07:48:06.456357192 +0000 UTC m=+0.024675576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:06 np0005534516 podman[75109]: 2025-11-25 07:48:06.66450391 +0000 UTC m=+0.232822294 container init 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:48:06 np0005534516 podman[75109]: 2025-11-25 07:48:06.67073525 +0000 UTC m=+0.239053634 container start 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:06 np0005534516 podman[75109]: 2025-11-25 07:48:06.6745429 +0000 UTC m=+0.242861284 container attach 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 02:48:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 25 02:48:07 np0005534516 systemd[1]: libpod-76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0.scope: Deactivated successfully.
Nov 25 02:48:07 np0005534516 podman[75109]: 2025-11-25 07:48:07.108745735 +0000 UTC m=+0.677064119 container died 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 02:48:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d4d4348af3d4582a8b13d30628543f582199c58dba5975036d8a39c205c25f44-merged.mount: Deactivated successfully.
Nov 25 02:48:07 np0005534516 podman[75109]: 2025-11-25 07:48:07.21047683 +0000 UTC m=+0.778795224 container remove 76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:07 np0005534516 systemd[1]: libpod-conmon-76f8c1c1d9ffbb1f6e82c99a1e987058ab9e9b36b0b00f0778f0c7a8557df2f0.scope: Deactivated successfully.
Nov 25 02:48:07 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:07 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:07 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:07 np0005534516 systemd[1]: Reloading.
Nov 25 02:48:07 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:48:07 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:48:07 np0005534516 systemd[1]: Starting Ceph mgr.compute-0.cpskve for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:48:08 np0005534516 podman[75293]: 2025-11-25 07:48:08.09793703 +0000 UTC m=+0.079749469 container create 3683d784a4f27697d689dc7d9db761fbeb621a12b2d4331800ce9d6daff5bdb6 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:48:08 np0005534516 podman[75293]: 2025-11-25 07:48:08.046400449 +0000 UTC m=+0.028212918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113a81bac9d6a8ac0b3f6ab6c1005b4e59b2037bbee459b2a3f95ab5c241376f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113a81bac9d6a8ac0b3f6ab6c1005b4e59b2037bbee459b2a3f95ab5c241376f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113a81bac9d6a8ac0b3f6ab6c1005b4e59b2037bbee459b2a3f95ab5c241376f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113a81bac9d6a8ac0b3f6ab6c1005b4e59b2037bbee459b2a3f95ab5c241376f/merged/var/lib/ceph/mgr/ceph-compute-0.cpskve supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 podman[75293]: 2025-11-25 07:48:08.180710528 +0000 UTC m=+0.162522987 container init 3683d784a4f27697d689dc7d9db761fbeb621a12b2d4331800ce9d6daff5bdb6 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:48:08 np0005534516 podman[75293]: 2025-11-25 07:48:08.19145866 +0000 UTC m=+0.173271139 container start 3683d784a4f27697d689dc7d9db761fbeb621a12b2d4331800ce9d6daff5bdb6 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:08 np0005534516 bash[75293]: 3683d784a4f27697d689dc7d9db761fbeb621a12b2d4331800ce9d6daff5bdb6
Nov 25 02:48:08 np0005534516 systemd[1]: Started Ceph mgr.compute-0.cpskve for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: pidfile_write: ignore empty --pid-file
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.287536622 +0000 UTC m=+0.052082400 container create 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:48:08 np0005534516 systemd[1]: Started libpod-conmon-966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c.scope.
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.265676918 +0000 UTC m=+0.030222696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'alerts'
Nov 25 02:48:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d858b43cd3b17c147210c3ca0fe226f781ec0da346ecadd64ecb4964c9b3257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d858b43cd3b17c147210c3ca0fe226f781ec0da346ecadd64ecb4964c9b3257/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d858b43cd3b17c147210c3ca0fe226f781ec0da346ecadd64ecb4964c9b3257/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.407865306 +0000 UTC m=+0.172411064 container init 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.415298601 +0000 UTC m=+0.179844349 container start 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.419231795 +0000 UTC m=+0.183777573 container attach 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'balancer'
Nov 25 02:48:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:08.669+0000 7fbb0592e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:48:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207565643' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:08 np0005534516 focused_easley[75353]: 
Nov 25 02:48:08 np0005534516 focused_easley[75353]: {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "health": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "status": "HEALTH_OK",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "checks": {},
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "mutes": []
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "election_epoch": 5,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "quorum": [
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        0
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    ],
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "quorum_names": [
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "compute-0"
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    ],
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "quorum_age": 3,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "monmap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "epoch": 1,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "min_mon_release_name": "reef",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_mons": 1
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "osdmap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "epoch": 1,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_osds": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_up_osds": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "osd_up_since": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_in_osds": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "osd_in_since": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_remapped_pgs": 0
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "pgmap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "pgs_by_state": [],
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_pgs": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_pools": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_objects": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "data_bytes": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "bytes_used": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "bytes_avail": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "bytes_total": 0
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "fsmap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "epoch": 1,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "by_rank": [],
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "up:standby": 0
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "mgrmap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "available": false,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "num_standbys": 0,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "modules": [
Nov 25 02:48:08 np0005534516 focused_easley[75353]:            "iostat",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:            "nfs",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:            "restful"
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        ],
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "services": {}
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "servicemap": {
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "epoch": 1,
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:08 np0005534516 focused_easley[75353]:        "services": {}
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    },
Nov 25 02:48:08 np0005534516 focused_easley[75353]:    "progress_events": {}
Nov 25 02:48:08 np0005534516 focused_easley[75353]: }
Nov 25 02:48:08 np0005534516 systemd[1]: libpod-966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c.scope: Deactivated successfully.
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.893429818 +0000 UTC m=+0.657975566 container died 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:48:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:08.915+0000 7fbb0592e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:48:08 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'cephadm'
Nov 25 02:48:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8d858b43cd3b17c147210c3ca0fe226f781ec0da346ecadd64ecb4964c9b3257-merged.mount: Deactivated successfully.
Nov 25 02:48:08 np0005534516 podman[75314]: 2025-11-25 07:48:08.961503668 +0000 UTC m=+0.726049416 container remove 966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c (image=quay.io/ceph/ceph:v18, name=focused_easley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:08 np0005534516 systemd[1]: libpod-conmon-966e6bac42305a0c7b2677c4e6899ba9f735675d1588954bd814196d6e20a05c.scope: Deactivated successfully.
Nov 25 02:48:10 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'crash'
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.012953577 +0000 UTC m=+0.027548549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.13561552 +0000 UTC m=+0.150210492 container create b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:11 np0005534516 systemd[1]: Started libpod-conmon-b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930.scope.
Nov 25 02:48:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e068b6a6231d0c2e37432cd3fd2f4dcf36e9386f9ef5e394128f168db1dccbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e068b6a6231d0c2e37432cd3fd2f4dcf36e9386f9ef5e394128f168db1dccbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e068b6a6231d0c2e37432cd3fd2f4dcf36e9386f9ef5e394128f168db1dccbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:11 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:11.250+0000 7fbb0592e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:48:11 np0005534516 ceph-mgr[75313]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:48:11 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'dashboard'
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.263782731 +0000 UTC m=+0.278377673 container init b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.270013402 +0000 UTC m=+0.284608354 container start b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.303709277 +0000 UTC m=+0.318304259 container attach b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 02:48:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878489018' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]: 
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]: {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "health": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "status": "HEALTH_OK",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "checks": {},
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "mutes": []
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "election_epoch": 5,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "quorum": [
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        0
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    ],
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "quorum_names": [
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "compute-0"
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    ],
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "quorum_age": 5,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "monmap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "epoch": 1,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "min_mon_release_name": "reef",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_mons": 1
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "osdmap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "epoch": 1,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_osds": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_up_osds": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "osd_up_since": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_in_osds": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "osd_in_since": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_remapped_pgs": 0
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "pgmap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "pgs_by_state": [],
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_pgs": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_pools": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_objects": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "data_bytes": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "bytes_used": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "bytes_avail": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "bytes_total": 0
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "fsmap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "epoch": 1,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "by_rank": [],
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "up:standby": 0
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "mgrmap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "available": false,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "num_standbys": 0,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "modules": [
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:            "iostat",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:            "nfs",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:            "restful"
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        ],
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "services": {}
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "servicemap": {
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "epoch": 1,
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:        "services": {}
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    },
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]:    "progress_events": {}
Nov 25 02:48:11 np0005534516 dazzling_leakey[75418]: }
Nov 25 02:48:11 np0005534516 systemd[1]: libpod-b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930.scope: Deactivated successfully.
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.661901229 +0000 UTC m=+0.676496171 container died b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0e068b6a6231d0c2e37432cd3fd2f4dcf36e9386f9ef5e394128f168db1dccbb-merged.mount: Deactivated successfully.
Nov 25 02:48:11 np0005534516 podman[75402]: 2025-11-25 07:48:11.724458522 +0000 UTC m=+0.739053464 container remove b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930 (image=quay.io/ceph/ceph:v18, name=dazzling_leakey, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:11 np0005534516 systemd[1]: libpod-conmon-b7063df0abfd5591f1693206d34758d3622f7ca734a1679e9767d3ac54e27930.scope: Deactivated successfully.
Nov 25 02:48:12 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'devicehealth'
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:13.040+0000 7fbb0592e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]:  from numpy import show_config as show_numpy_config
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:13.604+0000 7fbb0592e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'influx'
Nov 25 02:48:13 np0005534516 podman[75458]: 2025-11-25 07:48:13.807571516 +0000 UTC m=+0.050953056 container create 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:48:13 np0005534516 systemd[1]: Started libpod-conmon-51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7.scope.
Nov 25 02:48:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29be7ae1914420fa7b13ce398f73d0aa31546cd5f59756a0d0f187eadeed1cff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29be7ae1914420fa7b13ce398f73d0aa31546cd5f59756a0d0f187eadeed1cff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29be7ae1914420fa7b13ce398f73d0aa31546cd5f59756a0d0f187eadeed1cff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:13 np0005534516 podman[75458]: 2025-11-25 07:48:13.789886224 +0000 UTC m=+0.033267784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:13 np0005534516 podman[75458]: 2025-11-25 07:48:13.892499766 +0000 UTC m=+0.135881386 container init 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:13 np0005534516 podman[75458]: 2025-11-25 07:48:13.901370713 +0000 UTC m=+0.144752243 container start 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:48:13 np0005534516 podman[75458]: 2025-11-25 07:48:13.905356638 +0000 UTC m=+0.148738178 container attach 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:13.957+0000 7fbb0592e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:48:13 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'insights'
Nov 25 02:48:14 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'iostat'
Nov 25 02:48:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480428438' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]: 
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]: {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "health": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "status": "HEALTH_OK",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "checks": {},
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "mutes": []
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "election_epoch": 5,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "quorum": [
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        0
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    ],
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "quorum_names": [
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "compute-0"
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    ],
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "quorum_age": 8,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "monmap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "epoch": 1,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "min_mon_release_name": "reef",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_mons": 1
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "osdmap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "epoch": 1,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_osds": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_up_osds": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "osd_up_since": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_in_osds": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "osd_in_since": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_remapped_pgs": 0
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "pgmap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "pgs_by_state": [],
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_pgs": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_pools": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_objects": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "data_bytes": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "bytes_used": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "bytes_avail": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "bytes_total": 0
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "fsmap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "epoch": 1,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "by_rank": [],
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "up:standby": 0
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "mgrmap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "available": false,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "num_standbys": 0,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "modules": [
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:            "iostat",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:            "nfs",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:            "restful"
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        ],
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "services": {}
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "servicemap": {
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "epoch": 1,
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:        "services": {}
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    },
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]:    "progress_events": {}
Nov 25 02:48:14 np0005534516 vigorous_colden[75474]: }
Nov 25 02:48:14 np0005534516 systemd[1]: libpod-51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7.scope: Deactivated successfully.
Nov 25 02:48:14 np0005534516 podman[75458]: 2025-11-25 07:48:14.367516252 +0000 UTC m=+0.610897792 container died 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:48:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-29be7ae1914420fa7b13ce398f73d0aa31546cd5f59756a0d0f187eadeed1cff-merged.mount: Deactivated successfully.
Nov 25 02:48:14 np0005534516 podman[75458]: 2025-11-25 07:48:14.424640887 +0000 UTC m=+0.668022407 container remove 51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7 (image=quay.io/ceph/ceph:v18, name=vigorous_colden, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 02:48:14 np0005534516 systemd[1]: libpod-conmon-51c16564d59746e1e86a307c22c58c48b85a4d496787673cee9c647a7f4bcdb7.scope: Deactivated successfully.
Nov 25 02:48:14 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:14.437+0000 7fbb0592e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:48:14 np0005534516 ceph-mgr[75313]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:48:14 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'k8sevents'
Nov 25 02:48:16 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'localpool'
Nov 25 02:48:16 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 02:48:16 np0005534516 podman[75514]: 2025-11-25 07:48:16.479205484 +0000 UTC m=+0.028493206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:16 np0005534516 podman[75514]: 2025-11-25 07:48:16.978562035 +0000 UTC m=+0.527849697 container create a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:48:17 np0005534516 systemd[1]: Started libpod-conmon-a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01.scope.
Nov 25 02:48:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2474fb87c6d4ededf552d281750b4dad1a5d692430ea064cae30f54c3b937a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2474fb87c6d4ededf552d281750b4dad1a5d692430ea064cae30f54c3b937a4d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2474fb87c6d4ededf552d281750b4dad1a5d692430ea064cae30f54c3b937a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:17 np0005534516 podman[75514]: 2025-11-25 07:48:17.074245586 +0000 UTC m=+0.623533318 container init a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:17 np0005534516 podman[75514]: 2025-11-25 07:48:17.084906765 +0000 UTC m=+0.634194417 container start a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:48:17 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'mirroring'
Nov 25 02:48:17 np0005534516 podman[75514]: 2025-11-25 07:48:17.119711773 +0000 UTC m=+0.668999425 container attach a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:48:17 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'nfs'
Nov 25 02:48:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836010910' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]: 
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]: {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "health": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "status": "HEALTH_OK",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "checks": {},
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "mutes": []
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "election_epoch": 5,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "quorum": [
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        0
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    ],
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "quorum_names": [
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "compute-0"
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    ],
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "quorum_age": 11,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "monmap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "epoch": 1,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "min_mon_release_name": "reef",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_mons": 1
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "osdmap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "epoch": 1,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_osds": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_up_osds": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "osd_up_since": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_in_osds": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "osd_in_since": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_remapped_pgs": 0
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "pgmap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "pgs_by_state": [],
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_pgs": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_pools": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_objects": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "data_bytes": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "bytes_used": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "bytes_avail": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "bytes_total": 0
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "fsmap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "epoch": 1,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "by_rank": [],
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "up:standby": 0
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "mgrmap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "available": false,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "num_standbys": 0,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "modules": [
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:            "iostat",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:            "nfs",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:            "restful"
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        ],
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "services": {}
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "servicemap": {
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "epoch": 1,
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:        "services": {}
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    },
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]:    "progress_events": {}
Nov 25 02:48:17 np0005534516 relaxed_pike[75530]: }
Nov 25 02:48:17 np0005534516 systemd[1]: libpod-a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01.scope: Deactivated successfully.
Nov 25 02:48:17 np0005534516 podman[75556]: 2025-11-25 07:48:17.52985837 +0000 UTC m=+0.022288716 container died a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:48:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2474fb87c6d4ededf552d281750b4dad1a5d692430ea064cae30f54c3b937a4d-merged.mount: Deactivated successfully.
Nov 25 02:48:17 np0005534516 podman[75556]: 2025-11-25 07:48:17.621610258 +0000 UTC m=+0.114040624 container remove a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01 (image=quay.io/ceph/ceph:v18, name=relaxed_pike, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:17 np0005534516 systemd[1]: libpod-conmon-a415bb28499c9e8931ae728cccd8ff4efcc25802481aa22ebf4485aac94a8d01.scope: Deactivated successfully.
Nov 25 02:48:18 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:18.030+0000 7fbb0592e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'orchestrator'
Nov 25 02:48:18 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:18.649+0000 7fbb0592e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 02:48:18 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:18.898+0000 7fbb0592e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 02:48:18 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'osd_support'
Nov 25 02:48:19 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:19.116+0000 7fbb0592e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 02:48:19 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:19.373+0000 7fbb0592e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'progress'
Nov 25 02:48:19 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:19.615+0000 7fbb0592e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 02:48:19 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'prometheus'
Nov 25 02:48:19 np0005534516 podman[75569]: 2025-11-25 07:48:19.687345639 +0000 UTC m=+0.032136281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:19 np0005534516 podman[75569]: 2025-11-25 07:48:19.834700357 +0000 UTC m=+0.179490939 container create 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:48:19 np0005534516 systemd[1]: Started libpod-conmon-91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612.scope.
Nov 25 02:48:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ddd56da5ee07d24018d24dacf44e9877a060b10912fd10f85e553d04ccf15a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ddd56da5ee07d24018d24dacf44e9877a060b10912fd10f85e553d04ccf15a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ddd56da5ee07d24018d24dacf44e9877a060b10912fd10f85e553d04ccf15a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:19 np0005534516 podman[75569]: 2025-11-25 07:48:19.920903323 +0000 UTC m=+0.265693885 container init 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:48:19 np0005534516 podman[75569]: 2025-11-25 07:48:19.927153624 +0000 UTC m=+0.271944196 container start 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 02:48:19 np0005534516 podman[75569]: 2025-11-25 07:48:19.9349372 +0000 UTC m=+0.279727752 container attach 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 02:48:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425172570' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]: 
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]: {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "health": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "status": "HEALTH_OK",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "checks": {},
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "mutes": []
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "election_epoch": 5,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "quorum": [
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        0
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    ],
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "quorum_names": [
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "compute-0"
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    ],
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "quorum_age": 14,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "monmap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "epoch": 1,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "min_mon_release_name": "reef",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_mons": 1
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "osdmap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "epoch": 1,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_osds": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_up_osds": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "osd_up_since": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_in_osds": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "osd_in_since": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_remapped_pgs": 0
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "pgmap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "pgs_by_state": [],
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_pgs": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_pools": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_objects": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "data_bytes": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "bytes_used": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "bytes_avail": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "bytes_total": 0
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "fsmap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "epoch": 1,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "by_rank": [],
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "up:standby": 0
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "mgrmap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "available": false,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "num_standbys": 0,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "modules": [
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:            "iostat",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:            "nfs",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:            "restful"
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        ],
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "services": {}
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "servicemap": {
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "epoch": 1,
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:        "services": {}
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    },
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]:    "progress_events": {}
Nov 25 02:48:20 np0005534516 hardcore_grothendieck[75585]: }
Nov 25 02:48:20 np0005534516 systemd[1]: libpod-91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612.scope: Deactivated successfully.
Nov 25 02:48:20 np0005534516 podman[75569]: 2025-11-25 07:48:20.338471915 +0000 UTC m=+0.683262457 container died 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:48:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-16ddd56da5ee07d24018d24dacf44e9877a060b10912fd10f85e553d04ccf15a-merged.mount: Deactivated successfully.
Nov 25 02:48:20 np0005534516 podman[75569]: 2025-11-25 07:48:20.390962235 +0000 UTC m=+0.735752787 container remove 91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612 (image=quay.io/ceph/ceph:v18, name=hardcore_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 02:48:20 np0005534516 systemd[1]: libpod-conmon-91e74432ff13e747dea45d1cefa17de63602a88d18e8ef56b9f746ab8d919612.scope: Deactivated successfully.
Nov 25 02:48:20 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:20.696+0000 7fbb0592e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 02:48:20 np0005534516 ceph-mgr[75313]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 02:48:20 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rbd_support'
Nov 25 02:48:21 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:21.017+0000 7fbb0592e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 02:48:21 np0005534516 ceph-mgr[75313]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 02:48:21 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'restful'
Nov 25 02:48:21 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rgw'
Nov 25 02:48:22 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:22.463+0000 7fbb0592e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 02:48:22 np0005534516 ceph-mgr[75313]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 02:48:22 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rook'
Nov 25 02:48:22 np0005534516 podman[75625]: 2025-11-25 07:48:22.477018295 +0000 UTC m=+0.059401021 container create eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:22 np0005534516 systemd[1]: Started libpod-conmon-eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95.scope.
Nov 25 02:48:22 np0005534516 podman[75625]: 2025-11-25 07:48:22.444427221 +0000 UTC m=+0.026809977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2ca07e03da7e054efd81ef6e78d9b20ae55928b95cea22a73923d5e03f4d7e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2ca07e03da7e054efd81ef6e78d9b20ae55928b95cea22a73923d5e03f4d7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2ca07e03da7e054efd81ef6e78d9b20ae55928b95cea22a73923d5e03f4d7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:22 np0005534516 podman[75625]: 2025-11-25 07:48:22.606616538 +0000 UTC m=+0.188999394 container init eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:48:22 np0005534516 podman[75625]: 2025-11-25 07:48:22.612122628 +0000 UTC m=+0.194505354 container start eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 02:48:22 np0005534516 podman[75625]: 2025-11-25 07:48:22.636545455 +0000 UTC m=+0.218928211 container attach eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 02:48:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898018008' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]: 
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]: {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "health": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "status": "HEALTH_OK",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "checks": {},
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "mutes": []
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "election_epoch": 5,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "quorum": [
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        0
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    ],
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "quorum_names": [
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "compute-0"
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    ],
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "quorum_age": 17,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "monmap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "epoch": 1,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "min_mon_release_name": "reef",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_mons": 1
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "osdmap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "epoch": 1,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_osds": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_up_osds": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "osd_up_since": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_in_osds": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "osd_in_since": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_remapped_pgs": 0
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "pgmap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "pgs_by_state": [],
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_pgs": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_pools": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_objects": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "data_bytes": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "bytes_used": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "bytes_avail": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "bytes_total": 0
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "fsmap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "epoch": 1,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "by_rank": [],
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "up:standby": 0
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "mgrmap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "available": false,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "num_standbys": 0,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "modules": [
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:            "iostat",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:            "nfs",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:            "restful"
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        ],
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "services": {}
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "servicemap": {
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "epoch": 1,
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:        "services": {}
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    },
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]:    "progress_events": {}
Nov 25 02:48:23 np0005534516 hardcore_diffie[75641]: }
Nov 25 02:48:23 np0005534516 systemd[1]: libpod-eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95.scope: Deactivated successfully.
Nov 25 02:48:23 np0005534516 podman[75625]: 2025-11-25 07:48:23.103602831 +0000 UTC m=+0.685985567 container died eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:48:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-de2ca07e03da7e054efd81ef6e78d9b20ae55928b95cea22a73923d5e03f4d7e-merged.mount: Deactivated successfully.
Nov 25 02:48:23 np0005534516 podman[75625]: 2025-11-25 07:48:23.207100888 +0000 UTC m=+0.789483644 container remove eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95 (image=quay.io/ceph/ceph:v18, name=hardcore_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:48:23 np0005534516 systemd[1]: libpod-conmon-eb2f6844e4aa997733dbf2b0e5744a900583c28cbdd7ab92b363335cde136b95.scope: Deactivated successfully.
Nov 25 02:48:24 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:24.609+0000 7fbb0592e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 02:48:24 np0005534516 ceph-mgr[75313]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 02:48:24 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'selftest'
Nov 25 02:48:24 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:24.850+0000 7fbb0592e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 02:48:24 np0005534516 ceph-mgr[75313]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 02:48:24 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'snap_schedule'
Nov 25 02:48:25 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:25.095+0000 7fbb0592e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'stats'
Nov 25 02:48:25 np0005534516 podman[75680]: 2025-11-25 07:48:25.31014842 +0000 UTC m=+0.077329660 container create 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'status'
Nov 25 02:48:25 np0005534516 podman[75680]: 2025-11-25 07:48:25.255879348 +0000 UTC m=+0.023060588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:25 np0005534516 systemd[1]: Started libpod-conmon-014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc.scope.
Nov 25 02:48:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d82d5ff7a15bbeec86dddf5ffb09ea0ff88163492f918d50d44ce1e14f180b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d82d5ff7a15bbeec86dddf5ffb09ea0ff88163492f918d50d44ce1e14f180b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d82d5ff7a15bbeec86dddf5ffb09ea0ff88163492f918d50d44ce1e14f180b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:25 np0005534516 podman[75680]: 2025-11-25 07:48:25.521052268 +0000 UTC m=+0.288233498 container init 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:48:25 np0005534516 podman[75680]: 2025-11-25 07:48:25.53082651 +0000 UTC m=+0.298007720 container start 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 02:48:25 np0005534516 podman[75680]: 2025-11-25 07:48:25.550057777 +0000 UTC m=+0.317239017 container attach 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:48:25 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:25.632+0000 7fbb0592e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'telegraf'
Nov 25 02:48:25 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:25.874+0000 7fbb0592e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 02:48:25 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'telemetry'
Nov 25 02:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524452107' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]: 
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]: {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "health": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "status": "HEALTH_OK",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "checks": {},
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "mutes": []
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "election_epoch": 5,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "quorum": [
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        0
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    ],
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "quorum_names": [
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "compute-0"
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    ],
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "quorum_age": 20,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "monmap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "epoch": 1,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "min_mon_release_name": "reef",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_mons": 1
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "osdmap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "epoch": 1,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_osds": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_up_osds": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "osd_up_since": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_in_osds": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "osd_in_since": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_remapped_pgs": 0
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "pgmap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "pgs_by_state": [],
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_pgs": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_pools": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_objects": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "data_bytes": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "bytes_used": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "bytes_avail": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "bytes_total": 0
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "fsmap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "epoch": 1,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "by_rank": [],
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "up:standby": 0
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "mgrmap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "available": false,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "num_standbys": 0,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "modules": [
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:            "iostat",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:            "nfs",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:            "restful"
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        ],
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "services": {}
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "servicemap": {
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "epoch": 1,
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:        "services": {}
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    },
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]:    "progress_events": {}
Nov 25 02:48:25 np0005534516 vigilant_proskuriakova[75696]: }
Nov 25 02:48:25 np0005534516 systemd[1]: libpod-014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc.scope: Deactivated successfully.
Nov 25 02:48:25 np0005534516 podman[75722]: 2025-11-25 07:48:25.968419503 +0000 UTC m=+0.020266768 container died 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 02:48:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12d82d5ff7a15bbeec86dddf5ffb09ea0ff88163492f918d50d44ce1e14f180b-merged.mount: Deactivated successfully.
Nov 25 02:48:26 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:26.477+0000 7fbb0592e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 02:48:26 np0005534516 ceph-mgr[75313]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 02:48:26 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 02:48:26 np0005534516 podman[75722]: 2025-11-25 07:48:26.609179789 +0000 UTC m=+0.661027034 container remove 014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:48:26 np0005534516 systemd[1]: libpod-conmon-014d4b091962f5545ccb86a5f2b47a4cb00453a8a7070b7f1078601007de7fcc.scope: Deactivated successfully.
Nov 25 02:48:27 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:27.166+0000 7fbb0592e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:27 np0005534516 ceph-mgr[75313]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:27 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'volumes'
Nov 25 02:48:27 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:27.874+0000 7fbb0592e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 02:48:27 np0005534516 ceph-mgr[75313]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 02:48:27 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'zabbix'
Nov 25 02:48:28 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:28.110+0000 7fbb0592e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: ms_deliver_dispatch: unhandled message 0x5643428871e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cpskve
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr handle_mgr_map Activating!
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr handle_mgr_map I am now activating
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.cpskve(active, starting, since 0.0139736s)
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cpskve", "id": "compute-0.cpskve"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cpskve", "id": "compute-0.cpskve"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: balancer
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer INFO root] Starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:48:28
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [balancer INFO root] No pools available
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Manager daemon compute-0.cpskve is now available
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: crash
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: devicehealth
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: iostat
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: nfs
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: orchestrator
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: pg_autoscaler
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: progress
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [progress INFO root] Loading...
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [progress INFO root] No stored events to load
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [progress INFO root] Loaded [] historic events
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] recovery thread starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] starting setup
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: rbd_support
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: restful
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: status
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: Activating manager daemon compute-0.cpskve
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: Manager daemon compute-0.cpskve is now available
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: telemetry
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [restful WARNING root] server not running: no certificate configured
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] PerfHandler: starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TaskHandler: starting
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"} v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"}]: dispatch
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] setup complete
Nov 25 02:48:28 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: volumes
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 25 02:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:28 np0005534516 podman[75817]: 2025-11-25 07:48:28.760453926 +0000 UTC m=+0.099170593 container create 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:48:28 np0005534516 podman[75817]: 2025-11-25 07:48:28.701754606 +0000 UTC m=+0.040471293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:28 np0005534516 systemd[1]: Started libpod-conmon-727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6.scope.
Nov 25 02:48:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afceb8412c8b66c3804d9f3556816d0319fdd1982c8eb1236eb3193b74fc3fa4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afceb8412c8b66c3804d9f3556816d0319fdd1982c8eb1236eb3193b74fc3fa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afceb8412c8b66c3804d9f3556816d0319fdd1982c8eb1236eb3193b74fc3fa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:28 np0005534516 podman[75817]: 2025-11-25 07:48:28.914479818 +0000 UTC m=+0.253196565 container init 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 02:48:28 np0005534516 podman[75817]: 2025-11-25 07:48:28.923974193 +0000 UTC m=+0.262690870 container start 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 02:48:28 np0005534516 podman[75817]: 2025-11-25 07:48:28.948139993 +0000 UTC m=+0.286856740 container attach 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.cpskve(active, since 1.07047s)
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"}]: dispatch
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"}]: dispatch
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: from='mgr.14102 192.168.122.100:0/3157783486' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 02:48:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842792037' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]: 
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]: {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "health": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "status": "HEALTH_OK",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "checks": {},
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "mutes": []
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "election_epoch": 5,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "quorum": [
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        0
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    ],
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "quorum_names": [
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "compute-0"
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    ],
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "quorum_age": 23,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "monmap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "epoch": 1,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "min_mon_release_name": "reef",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_mons": 1
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "osdmap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "epoch": 1,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_osds": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_up_osds": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "osd_up_since": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_in_osds": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "osd_in_since": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_remapped_pgs": 0
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "pgmap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "pgs_by_state": [],
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_pgs": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_pools": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_objects": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "data_bytes": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "bytes_used": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "bytes_avail": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "bytes_total": 0
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "fsmap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "epoch": 1,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "by_rank": [],
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "up:standby": 0
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "mgrmap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "available": true,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "num_standbys": 0,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "modules": [
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:            "iostat",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:            "nfs",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:            "restful"
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        ],
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "services": {}
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "servicemap": {
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "epoch": 1,
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "modified": "2025-11-25T07:48:01.745386+0000",
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:        "services": {}
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    },
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]:    "progress_events": {}
Nov 25 02:48:29 np0005534516 amazing_archimedes[75834]: }
Nov 25 02:48:29 np0005534516 systemd[1]: libpod-727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6.scope: Deactivated successfully.
Nov 25 02:48:29 np0005534516 podman[75817]: 2025-11-25 07:48:29.526678237 +0000 UTC m=+0.865394914 container died 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 02:48:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-afceb8412c8b66c3804d9f3556816d0319fdd1982c8eb1236eb3193b74fc3fa4-merged.mount: Deactivated successfully.
Nov 25 02:48:29 np0005534516 podman[75817]: 2025-11-25 07:48:29.77366819 +0000 UTC m=+1.112384857 container remove 727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6 (image=quay.io/ceph/ceph:v18, name=amazing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:29 np0005534516 systemd[1]: libpod-conmon-727cbcb9310d4243adbc36b4638482bfbc92b8fa5a50f17926507263932484f6.scope: Deactivated successfully.
Nov 25 02:48:29 np0005534516 podman[75873]: 2025-11-25 07:48:29.877477366 +0000 UTC m=+0.077257199 container create 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:29 np0005534516 podman[75873]: 2025-11-25 07:48:29.826492579 +0000 UTC m=+0.026272452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:29 np0005534516 systemd[1]: Started libpod-conmon-99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee.scope.
Nov 25 02:48:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5f468c0baed4cc2b4112ef25f80385e0a01532fab3ed4761c17a67fe8ffc03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5f468c0baed4cc2b4112ef25f80385e0a01532fab3ed4761c17a67fe8ffc03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5f468c0baed4cc2b4112ef25f80385e0a01532fab3ed4761c17a67fe8ffc03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5f468c0baed4cc2b4112ef25f80385e0a01532fab3ed4761c17a67fe8ffc03/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:30 np0005534516 podman[75873]: 2025-11-25 07:48:30.00088294 +0000 UTC m=+0.200662833 container init 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:30 np0005534516 podman[75873]: 2025-11-25 07:48:30.008721427 +0000 UTC m=+0.208501270 container start 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 02:48:30 np0005534516 podman[75873]: 2025-11-25 07:48:30.023846754 +0000 UTC m=+0.223626627 container attach 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 02:48:30 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:48:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.cpskve(active, since 2s)
Nov 25 02:48:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 02:48:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3116285204' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:48:30 np0005534516 systemd[1]: libpod-99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee.scope: Deactivated successfully.
Nov 25 02:48:30 np0005534516 podman[75873]: 2025-11-25 07:48:30.579435423 +0000 UTC m=+0.779215256 container died 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:48:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8d5f468c0baed4cc2b4112ef25f80385e0a01532fab3ed4761c17a67fe8ffc03-merged.mount: Deactivated successfully.
Nov 25 02:48:31 np0005534516 podman[75873]: 2025-11-25 07:48:31.057522998 +0000 UTC m=+1.257302871 container remove 99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee (image=quay.io/ceph/ceph:v18, name=lucid_bouman, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:31 np0005534516 systemd[1]: libpod-conmon-99a94b3ea2e33abdb98d353e56afbbaa63dff8a44309d6b2856c8c6f400389ee.scope: Deactivated successfully.
Nov 25 02:48:31 np0005534516 podman[75927]: 2025-11-25 07:48:31.099677429 +0000 UTC m=+0.022754890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:31 np0005534516 podman[75927]: 2025-11-25 07:48:31.340716099 +0000 UTC m=+0.263793520 container create ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 02:48:31 np0005534516 systemd[1]: Started libpod-conmon-ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038.scope.
Nov 25 02:48:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96a0fff0a6891181d85a45d3ba229bd434c2bf9c7b5ec677540ae634ce34c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96a0fff0a6891181d85a45d3ba229bd434c2bf9c7b5ec677540ae634ce34c38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96a0fff0a6891181d85a45d3ba229bd434c2bf9c7b5ec677540ae634ce34c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:31 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3116285204' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:48:31 np0005534516 podman[75927]: 2025-11-25 07:48:31.610930455 +0000 UTC m=+0.534007896 container init ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:48:31 np0005534516 podman[75927]: 2025-11-25 07:48:31.616519207 +0000 UTC m=+0.539596638 container start ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:31 np0005534516 podman[75927]: 2025-11-25 07:48:31.625071535 +0000 UTC m=+0.548148996 container attach ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:48:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 25 02:48:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3095861851' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 02:48:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3095861851' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  1: '-n'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  2: 'mgr.compute-0.cpskve'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  3: '-f'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  4: '--setuser'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  5: 'ceph'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  6: '--setgroup'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  7: 'ceph'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr respawn  exe_path /proc/self/exe
Nov 25 02:48:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.cpskve(active, since 4s)
Nov 25 02:48:32 np0005534516 systemd[1]: libpod-ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038.scope: Deactivated successfully.
Nov 25 02:48:32 np0005534516 podman[75927]: 2025-11-25 07:48:32.633084744 +0000 UTC m=+1.556162205 container died ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:32 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3095861851' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 02:48:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e96a0fff0a6891181d85a45d3ba229bd434c2bf9c7b5ec677540ae634ce34c38-merged.mount: Deactivated successfully.
Nov 25 02:48:32 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: ignoring --setuser ceph since I am not root
Nov 25 02:48:32 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: ignoring --setgroup ceph since I am not root
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: pidfile_write: ignore empty --pid-file
Nov 25 02:48:32 np0005534516 podman[75927]: 2025-11-25 07:48:32.796701154 +0000 UTC m=+1.719778615 container remove ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038 (image=quay.io/ceph/ceph:v18, name=mystifying_babbage, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:32 np0005534516 systemd[1]: libpod-conmon-ede268948505589b95dcca96e9b2ca007a2268be0d3aed241a67fa7bd5675038.scope: Deactivated successfully.
Nov 25 02:48:32 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'alerts'
Nov 25 02:48:32 np0005534516 podman[76008]: 2025-11-25 07:48:32.911701174 +0000 UTC m=+0.095294030 container create c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:48:32 np0005534516 podman[76008]: 2025-11-25 07:48:32.837923858 +0000 UTC m=+0.021516704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:33 np0005534516 systemd[1]: Started libpod-conmon-c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4.scope.
Nov 25 02:48:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef2bed710f9ab708712c266bab2f5d9643bd8f26142f306204ad1e04a1a5422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef2bed710f9ab708712c266bab2f5d9643bd8f26142f306204ad1e04a1a5422/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef2bed710f9ab708712c266bab2f5d9643bd8f26142f306204ad1e04a1a5422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:33 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:33.167+0000 7f63a98f4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:48:33 np0005534516 ceph-mgr[75313]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:48:33 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'balancer'
Nov 25 02:48:33 np0005534516 podman[76008]: 2025-11-25 07:48:33.172221929 +0000 UTC m=+0.355814775 container init c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:33 np0005534516 podman[76008]: 2025-11-25 07:48:33.178358736 +0000 UTC m=+0.361951602 container start c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:48:33 np0005534516 podman[76008]: 2025-11-25 07:48:33.231096294 +0000 UTC m=+0.414689150 container attach c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:48:33 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:33.429+0000 7f63a98f4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:48:33 np0005534516 ceph-mgr[75313]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:48:33 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'cephadm'
Nov 25 02:48:33 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3095861851' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 02:48:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 02:48:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1927816501' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]: {
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]:    "epoch": 5,
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]:    "available": true,
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]:    "active_name": "compute-0.cpskve",
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]:    "num_standby": 0
Nov 25 02:48:33 np0005534516 compassionate_solomon[76024]: }
Nov 25 02:48:33 np0005534516 systemd[1]: libpod-c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4.scope: Deactivated successfully.
Nov 25 02:48:33 np0005534516 podman[76008]: 2025-11-25 07:48:33.754675566 +0000 UTC m=+0.938268392 container died c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:48:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8ef2bed710f9ab708712c266bab2f5d9643bd8f26142f306204ad1e04a1a5422-merged.mount: Deactivated successfully.
Nov 25 02:48:34 np0005534516 podman[76008]: 2025-11-25 07:48:34.187252463 +0000 UTC m=+1.370845289 container remove c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4 (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:34 np0005534516 systemd[1]: libpod-conmon-c36a39f75be07b328fd7e5ef48ddfba69e8466485f9b52a7c0a5e1a4e2cb2ce4.scope: Deactivated successfully.
Nov 25 02:48:34 np0005534516 podman[76062]: 2025-11-25 07:48:34.324129767 +0000 UTC m=+0.110353737 container create bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 02:48:34 np0005534516 podman[76062]: 2025-11-25 07:48:34.242422731 +0000 UTC m=+0.028646711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:34 np0005534516 systemd[1]: Started libpod-conmon-bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716.scope.
Nov 25 02:48:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee745587ea77a747dd35d87648673a5025ca7b096dba946c9d4e17968bb34ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee745587ea77a747dd35d87648673a5025ca7b096dba946c9d4e17968bb34ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee745587ea77a747dd35d87648673a5025ca7b096dba946c9d4e17968bb34ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:34 np0005534516 podman[76062]: 2025-11-25 07:48:34.45650407 +0000 UTC m=+0.242728040 container init bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:48:34 np0005534516 podman[76062]: 2025-11-25 07:48:34.461658719 +0000 UTC m=+0.247882679 container start bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:48:34 np0005534516 podman[76062]: 2025-11-25 07:48:34.466912031 +0000 UTC m=+0.253136011 container attach bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:35 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'crash'
Nov 25 02:48:35 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:35.701+0000 7f63a98f4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:48:35 np0005534516 ceph-mgr[75313]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:48:35 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'dashboard'
Nov 25 02:48:37 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'devicehealth'
Nov 25 02:48:37 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:37.620+0000 7f63a98f4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:48:37 np0005534516 ceph-mgr[75313]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:48:37 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]:  from numpy import show_config as show_numpy_config
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:38.221+0000 7f63a98f4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'influx'
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:38.467+0000 7f63a98f4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'insights'
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'iostat'
Nov 25 02:48:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:38.963+0000 7f63a98f4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:48:38 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'k8sevents'
Nov 25 02:48:40 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'localpool'
Nov 25 02:48:40 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 02:48:41 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'mirroring'
Nov 25 02:48:42 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'nfs'
Nov 25 02:48:42 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:42.715+0000 7f63a98f4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 02:48:42 np0005534516 ceph-mgr[75313]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 02:48:42 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'orchestrator'
Nov 25 02:48:43 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:43.357+0000 7f63a98f4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 02:48:43 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:43.605+0000 7f63a98f4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'osd_support'
Nov 25 02:48:43 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:43.821+0000 7f63a98f4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 02:48:43 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 02:48:44 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:44.097+0000 7f63a98f4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 02:48:44 np0005534516 ceph-mgr[75313]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 02:48:44 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'progress'
Nov 25 02:48:44 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:44.350+0000 7f63a98f4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 02:48:44 np0005534516 ceph-mgr[75313]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 02:48:44 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'prometheus'
Nov 25 02:48:45 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:45.341+0000 7f63a98f4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 02:48:45 np0005534516 ceph-mgr[75313]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 02:48:45 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rbd_support'
Nov 25 02:48:45 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:45.639+0000 7f63a98f4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 02:48:45 np0005534516 ceph-mgr[75313]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 02:48:45 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'restful'
Nov 25 02:48:46 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rgw'
Nov 25 02:48:47 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:47.150+0000 7f63a98f4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 02:48:47 np0005534516 ceph-mgr[75313]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 02:48:47 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'rook'
Nov 25 02:48:49 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:49.265+0000 7f63a98f4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'selftest'
Nov 25 02:48:49 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:49.522+0000 7f63a98f4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'snap_schedule'
Nov 25 02:48:49 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:49.782+0000 7f63a98f4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 02:48:49 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'stats'
Nov 25 02:48:50 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'status'
Nov 25 02:48:50 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:50.309+0000 7f63a98f4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 02:48:50 np0005534516 ceph-mgr[75313]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 02:48:50 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'telegraf'
Nov 25 02:48:50 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:50.545+0000 7f63a98f4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 02:48:50 np0005534516 ceph-mgr[75313]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 02:48:50 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'telemetry'
Nov 25 02:48:51 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:51.132+0000 7f63a98f4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 02:48:51 np0005534516 ceph-mgr[75313]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 02:48:51 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 02:48:51 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:51.815+0000 7f63a98f4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:51 np0005534516 ceph-mgr[75313]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 02:48:51 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'volumes'
Nov 25 02:48:52 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:52.529+0000 7f63a98f4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 02:48:52 np0005534516 ceph-mgr[75313]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 02:48:52 np0005534516 ceph-mgr[75313]: mgr[py] Loading python module 'zabbix'
Nov 25 02:48:52 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T07:48:52.765+0000 7f63a98f4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 02:48:52 np0005534516 ceph-mgr[75313]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 02:48:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cpskve restarted
Nov 25 02:48:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 02:48:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:48:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cpskve
Nov 25 02:48:52 np0005534516 ceph-mgr[75313]: ms_deliver_dispatch: unhandled message 0x555a34e811e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr handle_mgr_map Activating!
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr handle_mgr_map I am now activating
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.cpskve(active, starting, since 0.292208s)
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cpskve", "id": "compute-0.cpskve"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cpskve", "id": "compute-0.cpskve"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: balancer
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Manager daemon compute-0.cpskve is now available
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:48:53
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] No pools available
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: Active manager daemon compute-0.cpskve restarted
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: Activating manager daemon compute-0.cpskve
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: cephadm
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: crash
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: devicehealth
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Starting
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: iostat
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: nfs
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: orchestrator
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: pg_autoscaler
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: progress
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [progress INFO root] Loading...
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [progress INFO root] No stored events to load
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [progress INFO root] Loaded [] historic events
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] recovery thread starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] starting setup
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: rbd_support
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: restful
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: status
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] PerfHandler: starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: telemetry
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TaskHandler: starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [restful WARNING root] server not running: no certificate configured
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"} v 0) v1
Nov 25 02:48:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"}]: dispatch
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] setup complete
Nov 25 02:48:53 np0005534516 ceph-mgr[75313]: mgr load Constructed class from module: volumes
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.cpskve(active, since 1.4173s)
Nov 25 02:48:54 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 02:48:54 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 02:48:54 np0005534516 stoic_hopper[76079]: {
Nov 25 02:48:54 np0005534516 stoic_hopper[76079]:    "mgrmap_epoch": 7,
Nov 25 02:48:54 np0005534516 stoic_hopper[76079]:    "initialized": true
Nov 25 02:48:54 np0005534516 stoic_hopper[76079]: }
Nov 25 02:48:54 np0005534516 systemd[1]: libpod-bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716.scope: Deactivated successfully.
Nov 25 02:48:54 np0005534516 podman[76062]: 2025-11-25 07:48:54.22092809 +0000 UTC m=+20.007152050 container died bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: Manager daemon compute-0.cpskve is now available
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: Found migration_current of "None". Setting to last migration.
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/mirror_snapshot_schedule"}]: dispatch
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cpskve/trash_purge_schedule"}]: dispatch
Nov 25 02:48:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fee745587ea77a747dd35d87648673a5025ca7b096dba946c9d4e17968bb34ec-merged.mount: Deactivated successfully.
Nov 25 02:48:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 25 02:48:55 np0005534516 podman[76062]: 2025-11-25 07:48:55.008803455 +0000 UTC m=+20.795027415 container remove bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716 (image=quay.io/ceph/ceph:v18, name=stoic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:55 np0005534516 systemd[1]: libpod-conmon-bd9f2be89031ca57c3b0f9fd46e2f810f45988fc497b7ae13e30deabc08e8716.scope: Deactivated successfully.
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 25 02:48:55 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:48:55 np0005534516 podman[76240]: 2025-11-25 07:48:55.049704739 +0000 UTC m=+0.021628876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:55 np0005534516 podman[76240]: 2025-11-25 07:48:55.289049341 +0000 UTC m=+0.260973428 container create 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:48:55 np0005534516 systemd[1]: Started libpod-conmon-25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b.scope.
Nov 25 02:48:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e4b5a2e15876eec6175a356342de69244d9d0e7c2d84e2ec65e38b1c757fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e4b5a2e15876eec6175a356342de69244d9d0e7c2d84e2ec65e38b1c757fb7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e4b5a2e15876eec6175a356342de69244d9d0e7c2d84e2ec65e38b1c757fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:55 np0005534516 podman[76240]: 2025-11-25 07:48:55.619898632 +0000 UTC m=+0.591822679 container init 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:48:55 np0005534516 podman[76240]: 2025-11-25 07:48:55.628875072 +0000 UTC m=+0.600799119 container start 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:48:55 np0005534516 podman[76240]: 2025-11-25 07:48:55.641437016 +0000 UTC m=+0.613361063 container attach 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.cpskve(active, since 2s)
Nov 25 02:48:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922418 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:48:56 np0005534516 systemd[1]: libpod-25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b.scope: Deactivated successfully.
Nov 25 02:48:56 np0005534516 podman[76240]: 2025-11-25 07:48:56.362380144 +0000 UTC m=+1.334304211 container died 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: [cephadm INFO cherrypy.error] [25/Nov/2025:07:48:56] ENGINE Bus STARTING
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : [25/Nov/2025:07:48:56] ENGINE Bus STARTING
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: [cephadm INFO cherrypy.error] [25/Nov/2025:07:48:56] ENGINE Serving on https://192.168.122.100:7150
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : [25/Nov/2025:07:48:56] ENGINE Serving on https://192.168.122.100:7150
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: [cephadm INFO cherrypy.error] [25/Nov/2025:07:48:56] ENGINE Client ('192.168.122.100', 50724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : [25/Nov/2025:07:48:56] ENGINE Client ('192.168.122.100', 50724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: [cephadm INFO cherrypy.error] [25/Nov/2025:07:48:56] ENGINE Serving on http://192.168.122.100:8765
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : [25/Nov/2025:07:48:56] ENGINE Serving on http://192.168.122.100:8765
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: [cephadm INFO cherrypy.error] [25/Nov/2025:07:48:56] ENGINE Bus STARTED
Nov 25 02:48:56 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : [25/Nov/2025:07:48:56] ENGINE Bus STARTED
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:48:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:48:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-38e4b5a2e15876eec6175a356342de69244d9d0e7c2d84e2ec65e38b1c757fb7-merged.mount: Deactivated successfully.
Nov 25 02:48:57 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:48:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:48:57 np0005534516 ceph-mon[75015]: [25/Nov/2025:07:48:56] ENGINE Bus STARTING
Nov 25 02:48:57 np0005534516 ceph-mon[75015]: [25/Nov/2025:07:48:56] ENGINE Serving on https://192.168.122.100:7150
Nov 25 02:48:57 np0005534516 ceph-mon[75015]: [25/Nov/2025:07:48:56] ENGINE Client ('192.168.122.100', 50724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 02:48:57 np0005534516 podman[76240]: 2025-11-25 07:48:57.768276677 +0000 UTC m=+2.740200724 container remove 25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b (image=quay.io/ceph/ceph:v18, name=hungry_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:48:57 np0005534516 systemd[1]: libpod-conmon-25e5907031e0de63d2350bcd892098e58bd974751c7e913ab6052295c546655b.scope: Deactivated successfully.
Nov 25 02:48:57 np0005534516 podman[76317]: 2025-11-25 07:48:57.828371137 +0000 UTC m=+0.034614873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:48:58 np0005534516 podman[76317]: 2025-11-25 07:48:58.174926573 +0000 UTC m=+0.381170229 container create f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:48:58 np0005534516 ceph-mon[75015]: [25/Nov/2025:07:48:56] ENGINE Serving on http://192.168.122.100:8765
Nov 25 02:48:58 np0005534516 ceph-mon[75015]: [25/Nov/2025:07:48:56] ENGINE Bus STARTED
Nov 25 02:48:58 np0005534516 systemd[1]: Started libpod-conmon-f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8.scope.
Nov 25 02:48:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:48:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9308f76a1e6d84602277f19798a0abc9cd3804097136289f88b647a6fc96d7d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9308f76a1e6d84602277f19798a0abc9cd3804097136289f88b647a6fc96d7d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9308f76a1e6d84602277f19798a0abc9cd3804097136289f88b647a6fc96d7d3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:48:59 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:48:59 np0005534516 podman[76317]: 2025-11-25 07:48:59.189532765 +0000 UTC m=+1.395776501 container init f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:48:59 np0005534516 podman[76317]: 2025-11-25 07:48:59.201514932 +0000 UTC m=+1.407758598 container start f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:48:59 np0005534516 podman[76317]: 2025-11-25 07:48:59.51118026 +0000 UTC m=+1.717423956 container attach f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:48:59 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:48:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 25 02:49:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Set ssh ssh_user
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 02:49:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 25 02:49:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053040 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Set ssh ssh_config
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 02:49:00 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 02:49:00 np0005534516 vibrant_mendel[76333]: ssh user set to ceph-admin. sudo will be used
Nov 25 02:49:00 np0005534516 systemd[1]: libpod-f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8.scope: Deactivated successfully.
Nov 25 02:49:00 np0005534516 podman[76317]: 2025-11-25 07:49:00.957416401 +0000 UTC m=+3.163660047 container died f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:49:01 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9308f76a1e6d84602277f19798a0abc9cd3804097136289f88b647a6fc96d7d3-merged.mount: Deactivated successfully.
Nov 25 02:49:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:01 np0005534516 ceph-mon[75015]: Set ssh ssh_user
Nov 25 02:49:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:02 np0005534516 podman[76317]: 2025-11-25 07:49:02.286565613 +0000 UTC m=+4.492809299 container remove f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8 (image=quay.io/ceph/ceph:v18, name=vibrant_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:49:02 np0005534516 systemd[1]: libpod-conmon-f84a451f802e4b36eb8f1cd9a3f2582d3ac10fef4016d59ebfe76889b58071a8.scope: Deactivated successfully.
Nov 25 02:49:02 np0005534516 podman[76372]: 2025-11-25 07:49:02.357353923 +0000 UTC m=+0.039242028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:02 np0005534516 podman[76372]: 2025-11-25 07:49:02.485578226 +0000 UTC m=+0.167466261 container create 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:02 np0005534516 systemd[1]: Started libpod-conmon-8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17.scope.
Nov 25 02:49:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:02 np0005534516 podman[76372]: 2025-11-25 07:49:02.803806991 +0000 UTC m=+0.485695056 container init 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:49:02 np0005534516 podman[76372]: 2025-11-25 07:49:02.812838033 +0000 UTC m=+0.494726038 container start 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 02:49:02 np0005534516 podman[76372]: 2025-11-25 07:49:02.919977696 +0000 UTC m=+0.601865721 container attach 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:49:03 np0005534516 ceph-mon[75015]: Set ssh ssh_config
Nov 25 02:49:03 np0005534516 ceph-mon[75015]: ssh user set to ceph-admin. sudo will be used
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 25 02:49:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Set ssh private key
Nov 25 02:49:03 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 02:49:03 np0005534516 systemd[1]: libpod-8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17.scope: Deactivated successfully.
Nov 25 02:49:03 np0005534516 podman[76372]: 2025-11-25 07:49:03.561895184 +0000 UTC m=+1.243783219 container died 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:49:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-297cdb457cb5f4fbdeb1f22420a575fef6e6e5b51d6c9882a376e156551e84a2-merged.mount: Deactivated successfully.
Nov 25 02:49:03 np0005534516 podman[76372]: 2025-11-25 07:49:03.80654152 +0000 UTC m=+1.488429515 container remove 8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17 (image=quay.io/ceph/ceph:v18, name=relaxed_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:49:03 np0005534516 systemd[1]: libpod-conmon-8c5cbb83537bf38fa56799e43e13322c90dc89a058494c7c808a966d2be6bc17.scope: Deactivated successfully.
Nov 25 02:49:03 np0005534516 podman[76428]: 2025-11-25 07:49:03.92082957 +0000 UTC m=+0.085790566 container create 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 02:49:03 np0005534516 podman[76428]: 2025-11-25 07:49:03.872263264 +0000 UTC m=+0.037224340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:03 np0005534516 systemd[1]: Started libpod-conmon-89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88.scope.
Nov 25 02:49:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:04 np0005534516 podman[76428]: 2025-11-25 07:49:04.040248728 +0000 UTC m=+0.205209734 container init 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:04 np0005534516 podman[76428]: 2025-11-25 07:49:04.048046813 +0000 UTC m=+0.213007819 container start 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:04 np0005534516 podman[76428]: 2025-11-25 07:49:04.052826592 +0000 UTC m=+0.217787638 container attach 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:04 np0005534516 ceph-mon[75015]: Set ssh ssh_identity_key
Nov 25 02:49:04 np0005534516 ceph-mon[75015]: Set ssh private key
Nov 25 02:49:04 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 25 02:49:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:04 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 02:49:04 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 02:49:04 np0005534516 systemd[1]: libpod-89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88.scope: Deactivated successfully.
Nov 25 02:49:04 np0005534516 podman[76428]: 2025-11-25 07:49:04.718056687 +0000 UTC m=+0.883017673 container died 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 02:49:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-db271404024250348586387b1cadf8a239fdf7139240d9aa6179f8bebfd5361a-merged.mount: Deactivated successfully.
Nov 25 02:49:05 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:05 np0005534516 podman[76428]: 2025-11-25 07:49:05.147475832 +0000 UTC m=+1.312436848 container remove 89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88 (image=quay.io/ceph/ceph:v18, name=eager_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:49:05 np0005534516 systemd[1]: libpod-conmon-89058d6f3aaf386cca30548698cfdfc1a4d8845c7fc87e62e03baac6b0885d88.scope: Deactivated successfully.
Nov 25 02:49:05 np0005534516 podman[76485]: 2025-11-25 07:49:05.23718236 +0000 UTC m=+0.058757333 container create c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:05 np0005534516 systemd[1]: Started libpod-conmon-c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29.scope.
Nov 25 02:49:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:05 np0005534516 podman[76485]: 2025-11-25 07:49:05.203134954 +0000 UTC m=+0.024709937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c276704fe0afae9051286401098d67fcb9a96faaa046808a8a1f512e1c3181/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c276704fe0afae9051286401098d67fcb9a96faaa046808a8a1f512e1c3181/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c276704fe0afae9051286401098d67fcb9a96faaa046808a8a1f512e1c3181/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:05 np0005534516 podman[76485]: 2025-11-25 07:49:05.325544758 +0000 UTC m=+0.147119751 container init c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:05 np0005534516 podman[76485]: 2025-11-25 07:49:05.33455069 +0000 UTC m=+0.156125703 container start c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:49:05 np0005534516 podman[76485]: 2025-11-25 07:49:05.345531188 +0000 UTC m=+0.167106191 container attach c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:49:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:05 np0005534516 ceph-mon[75015]: Set ssh ssh_identity_pub
Nov 25 02:49:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:05 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:05 np0005534516 affectionate_carson[76502]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh+yjMGeFVmPJJiu3NW/kaDc/omQFDC7WKp2njEV6NjCwuouP/Xg0UBFBlQ+kpJowCm5GbTGvr+RvpDq6Gf1xwylp6O1Jcen2eERZPKGZZ5ttXYcsaubjP3tCvvK8vopoQWOOuwtmQgHvgqmqRkWTr3+NDt/yIitHv0ar223IaOWuhSuhfZhImp50TKLXdDJiHGgEx0o51fLOGcWcLU9owv7FxsiRcIqPvrHws4qZqyJBwDjDa7sKa2ifDM0VHhHDidoINoN4SKf35nKeZRQwKDnOL1859+oOoSEM0la/XY5akyJKok7X3ScNQsaNEggWv014ak2zqy0KPo/k8Ck03nhLdmBptrtzlKAUBwHOWMFEgDIlOUJm/wXVEux8rqsuDUvkej8gCmvWW3hhddYg2E3tXIZNfRdVlfaqUc0n8wtVIhtWNz8iGtQeq5jZf06UWiQpJhbJ4/Yj2TMZCeuyFTaJf7CpKjcEkFX18hHTpxu4ME144fsOLxkt4gHhUKXU= zuul@controller
Nov 25 02:49:05 np0005534516 systemd[1]: libpod-c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29.scope: Deactivated successfully.
Nov 25 02:49:05 np0005534516 podman[76528]: 2025-11-25 07:49:05.948055444 +0000 UTC m=+0.024946567 container died c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-54c276704fe0afae9051286401098d67fcb9a96faaa046808a8a1f512e1c3181-merged.mount: Deactivated successfully.
Nov 25 02:49:06 np0005534516 podman[76528]: 2025-11-25 07:49:06.002160032 +0000 UTC m=+0.079051125 container remove c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29 (image=quay.io/ceph/ceph:v18, name=affectionate_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:06 np0005534516 systemd[1]: libpod-conmon-c0749332f6b890747034170589913ac4e307b84191be529c3a892f4064abff29.scope: Deactivated successfully.
Nov 25 02:49:06 np0005534516 podman[76542]: 2025-11-25 07:49:06.107429943 +0000 UTC m=+0.066783983 container create 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:49:06 np0005534516 systemd[1]: Started libpod-conmon-7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266.scope.
Nov 25 02:49:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5582c05df1ee693c52f86c566d576d83f38d44645af34f0835d4d6bfe6445f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5582c05df1ee693c52f86c566d576d83f38d44645af34f0835d4d6bfe6445f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5582c05df1ee693c52f86c566d576d83f38d44645af34f0835d4d6bfe6445f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:06 np0005534516 podman[76542]: 2025-11-25 07:49:06.077224619 +0000 UTC m=+0.036578679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:06 np0005534516 podman[76542]: 2025-11-25 07:49:06.195147714 +0000 UTC m=+0.154501744 container init 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:06 np0005534516 podman[76542]: 2025-11-25 07:49:06.200474484 +0000 UTC m=+0.159828514 container start 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:06 np0005534516 podman[76542]: 2025-11-25 07:49:06.204459581 +0000 UTC m=+0.163813611 container attach 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:49:06 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:06 np0005534516 systemd-logind[822]: New session 21 of user ceph-admin.
Nov 25 02:49:06 np0005534516 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 02:49:06 np0005534516 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 02:49:06 np0005534516 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 02:49:07 np0005534516 systemd[1]: Starting User Manager for UID 42477...
Nov 25 02:49:07 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:07 np0005534516 systemd-logind[822]: New session 23 of user ceph-admin.
Nov 25 02:49:07 np0005534516 systemd[76588]: Queued start job for default target Main User Target.
Nov 25 02:49:07 np0005534516 systemd[76588]: Created slice User Application Slice.
Nov 25 02:49:07 np0005534516 systemd[76588]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 02:49:07 np0005534516 systemd[76588]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 02:49:07 np0005534516 systemd[76588]: Reached target Paths.
Nov 25 02:49:07 np0005534516 systemd[76588]: Reached target Timers.
Nov 25 02:49:07 np0005534516 systemd[76588]: Starting D-Bus User Message Bus Socket...
Nov 25 02:49:07 np0005534516 systemd[76588]: Starting Create User's Volatile Files and Directories...
Nov 25 02:49:07 np0005534516 systemd[76588]: Listening on D-Bus User Message Bus Socket.
Nov 25 02:49:07 np0005534516 systemd[76588]: Reached target Sockets.
Nov 25 02:49:07 np0005534516 systemd[76588]: Finished Create User's Volatile Files and Directories.
Nov 25 02:49:07 np0005534516 systemd[76588]: Reached target Basic System.
Nov 25 02:49:07 np0005534516 systemd[76588]: Reached target Main User Target.
Nov 25 02:49:07 np0005534516 systemd[76588]: Startup finished in 155ms.
Nov 25 02:49:07 np0005534516 systemd[1]: Started User Manager for UID 42477.
Nov 25 02:49:07 np0005534516 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 02:49:07 np0005534516 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 02:49:07 np0005534516 systemd-logind[822]: New session 24 of user ceph-admin.
Nov 25 02:49:07 np0005534516 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 02:49:08 np0005534516 systemd-logind[822]: New session 25 of user ceph-admin.
Nov 25 02:49:08 np0005534516 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 02:49:08 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 02:49:08 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 02:49:08 np0005534516 systemd-logind[822]: New session 26 of user ceph-admin.
Nov 25 02:49:08 np0005534516 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 02:49:08 np0005534516 ceph-mon[75015]: Deploying cephadm binary to compute-0
Nov 25 02:49:08 np0005534516 systemd-logind[822]: New session 27 of user ceph-admin.
Nov 25 02:49:08 np0005534516 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 02:49:09 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:09 np0005534516 systemd-logind[822]: New session 28 of user ceph-admin.
Nov 25 02:49:09 np0005534516 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 02:49:09 np0005534516 systemd-logind[822]: New session 29 of user ceph-admin.
Nov 25 02:49:09 np0005534516 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 02:49:10 np0005534516 systemd-logind[822]: New session 30 of user ceph-admin.
Nov 25 02:49:10 np0005534516 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 02:49:10 np0005534516 systemd-logind[822]: New session 31 of user ceph-admin.
Nov 25 02:49:10 np0005534516 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 02:49:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:11 np0005534516 ceph-mgr[75313]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 02:49:11 np0005534516 systemd-logind[822]: New session 32 of user ceph-admin.
Nov 25 02:49:11 np0005534516 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 02:49:11 np0005534516 systemd-logind[822]: New session 33 of user ceph-admin.
Nov 25 02:49:11 np0005534516 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 02:49:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:12 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Added host compute-0
Nov 25 02:49:12 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 02:49:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:49:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:49:12 np0005534516 strange_solomon[76558]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 02:49:12 np0005534516 systemd[1]: libpod-7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266.scope: Deactivated successfully.
Nov 25 02:49:12 np0005534516 podman[76542]: 2025-11-25 07:49:12.221248229 +0000 UTC m=+6.180602279 container died 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:49:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0c5582c05df1ee693c52f86c566d576d83f38d44645af34f0835d4d6bfe6445f-merged.mount: Deactivated successfully.
Nov 25 02:49:12 np0005534516 podman[76542]: 2025-11-25 07:49:12.40071622 +0000 UTC m=+6.360070240 container remove 7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266 (image=quay.io/ceph/ceph:v18, name=strange_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 02:49:12 np0005534516 systemd[1]: libpod-conmon-7bc068ae8c02321a87765ecb8bb78b0ddb8e595d110fc225a81eb61ebba75266.scope: Deactivated successfully.
Nov 25 02:49:12 np0005534516 podman[77278]: 2025-11-25 07:49:12.505233597 +0000 UTC m=+0.075981507 container create e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:12 np0005534516 podman[77278]: 2025-11-25 07:49:12.465620882 +0000 UTC m=+0.036368812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:12 np0005534516 systemd[1]: Started libpod-conmon-e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2.scope.
Nov 25 02:49:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129f61c08d79f7e736d8e8b9acc08d4dfc554bde0d21ed8202d035d0ba91dc64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129f61c08d79f7e736d8e8b9acc08d4dfc554bde0d21ed8202d035d0ba91dc64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129f61c08d79f7e736d8e8b9acc08d4dfc554bde0d21ed8202d035d0ba91dc64/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:12 np0005534516 podman[77278]: 2025-11-25 07:49:12.779496664 +0000 UTC m=+0.350244634 container init e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:12 np0005534516 podman[77278]: 2025-11-25 07:49:12.791757576 +0000 UTC m=+0.362505516 container start e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 02:49:12 np0005534516 podman[77278]: 2025-11-25 07:49:12.899516616 +0000 UTC m=+0.470264546 container attach e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:12.91526881 +0000 UTC m=+0.088519688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:13 np0005534516 ceph-mgr[75313]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 02:49:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.206748057 +0000 UTC m=+0.379998885 container create 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:13 np0005534516 systemd[1]: Started libpod-conmon-10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768.scope.
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: Added host compute-0
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 02:49:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:13 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:13 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 02:49:13 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.515831166 +0000 UTC m=+0.689082034 container init 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:13 np0005534516 hungry_almeida[77322]: Scheduled mon update...
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.528887493 +0000 UTC m=+0.702138281 container start 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:13 np0005534516 systemd[1]: libpod-e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2.scope: Deactivated successfully.
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.608388531 +0000 UTC m=+0.781639329 container attach 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:49:13 np0005534516 podman[77278]: 2025-11-25 07:49:13.625238709 +0000 UTC m=+1.195986629 container died e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-129f61c08d79f7e736d8e8b9acc08d4dfc554bde0d21ed8202d035d0ba91dc64-merged.mount: Deactivated successfully.
Nov 25 02:49:13 np0005534516 podman[77278]: 2025-11-25 07:49:13.697087873 +0000 UTC m=+1.267835783 container remove e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2 (image=quay.io/ceph/ceph:v18, name=hungry_almeida, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 02:49:13 np0005534516 systemd[1]: libpod-conmon-e9b3139fde8ce68d4700d95343eb1de39f6fe26bad59f9d927e9956172c5d6e2.scope: Deactivated successfully.
Nov 25 02:49:13 np0005534516 podman[77405]: 2025-11-25 07:49:13.76526753 +0000 UTC m=+0.047577390 container create 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 02:49:13 np0005534516 systemd[1]: Started libpod-conmon-8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535.scope.
Nov 25 02:49:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351983a1263355d8a67a38fe412e6e552eb11b966d7c584f6e1202a8ca53aeac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351983a1263355d8a67a38fe412e6e552eb11b966d7c584f6e1202a8ca53aeac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351983a1263355d8a67a38fe412e6e552eb11b966d7c584f6e1202a8ca53aeac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:13 np0005534516 podman[77405]: 2025-11-25 07:49:13.743126613 +0000 UTC m=+0.025436503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:13 np0005534516 podman[77405]: 2025-11-25 07:49:13.843127596 +0000 UTC m=+0.125437476 container init 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:13 np0005534516 sweet_agnesi[77384]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 02:49:13 np0005534516 podman[77405]: 2025-11-25 07:49:13.849589352 +0000 UTC m=+0.131899222 container start 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:13 np0005534516 podman[77405]: 2025-11-25 07:49:13.853329392 +0000 UTC m=+0.135639252 container attach 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:13 np0005534516 systemd[1]: libpod-10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768.scope: Deactivated successfully.
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.860050197 +0000 UTC m=+1.033301015 container died 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f7316e60cbdb5257b153950aa869b5785299e5cdbd9abbd3c57b5e1ba2ce45da-merged.mount: Deactivated successfully.
Nov 25 02:49:13 np0005534516 podman[77349]: 2025-11-25 07:49:13.90903458 +0000 UTC m=+1.082285388 container remove 10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768 (image=quay.io/ceph/ceph:v18, name=sweet_agnesi, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 02:49:13 np0005534516 systemd[1]: libpod-conmon-10a4f074fe3bf7ee5a32a7163366dd7bde64b43466b1e08f029ffb10ebd86768.scope: Deactivated successfully.
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 25 02:49:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:14 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 02:49:14 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 hungry_jemison[77421]: Scheduled mgr update...
Nov 25 02:49:14 np0005534516 systemd[1]: libpod-8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535.scope: Deactivated successfully.
Nov 25 02:49:14 np0005534516 podman[77405]: 2025-11-25 07:49:14.424100967 +0000 UTC m=+0.706410877 container died 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-351983a1263355d8a67a38fe412e6e552eb11b966d7c584f6e1202a8ca53aeac-merged.mount: Deactivated successfully.
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 podman[77405]: 2025-11-25 07:49:14.483123391 +0000 UTC m=+0.765433251 container remove 8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 02:49:14 np0005534516 systemd[1]: libpod-conmon-8e671a77fcfaead1f89ad9fc346aa2c555b88ebb4874c41cced5eeb1cd4a8535.scope: Deactivated successfully.
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: Saving service mon spec with placement count:5
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:14 np0005534516 podman[77597]: 2025-11-25 07:49:14.5460063 +0000 UTC m=+0.045092641 container create c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:49:14 np0005534516 systemd[1]: Started libpod-conmon-c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2.scope.
Nov 25 02:49:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebb40bb77f4b42282ba9eab88d12e56448e9cd231a9db1f26ed5fa1682112f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebb40bb77f4b42282ba9eab88d12e56448e9cd231a9db1f26ed5fa1682112f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebb40bb77f4b42282ba9eab88d12e56448e9cd231a9db1f26ed5fa1682112f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:14 np0005534516 podman[77597]: 2025-11-25 07:49:14.525931169 +0000 UTC m=+0.025017530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:14 np0005534516 podman[77597]: 2025-11-25 07:49:14.626123937 +0000 UTC m=+0.125210288 container init c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:49:14 np0005534516 podman[77597]: 2025-11-25 07:49:14.6324366 +0000 UTC m=+0.131522941 container start c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:49:14 np0005534516 podman[77597]: 2025-11-25 07:49:14.636666254 +0000 UTC m=+0.135752625 container attach c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:49:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:15 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:15 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 02:49:15 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:15 np0005534516 busy_mahavira[77656]: Scheduled crash update...
Nov 25 02:49:15 np0005534516 systemd[1]: libpod-c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2.scope: Deactivated successfully.
Nov 25 02:49:15 np0005534516 podman[77597]: 2025-11-25 07:49:15.172797804 +0000 UTC m=+0.671884165 container died c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:15 np0005534516 podman[77802]: 2025-11-25 07:49:15.201786049 +0000 UTC m=+0.069857642 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2ebb40bb77f4b42282ba9eab88d12e56448e9cd231a9db1f26ed5fa1682112f0-merged.mount: Deactivated successfully.
Nov 25 02:49:15 np0005534516 podman[77597]: 2025-11-25 07:49:15.261434433 +0000 UTC m=+0.760520774 container remove c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2 (image=quay.io/ceph/ceph:v18, name=busy_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:15 np0005534516 systemd[1]: libpod-conmon-c360554f912d779790c2a6d9f04b87183fa4e8109e2bb20e65e1cb6ec79782c2.scope: Deactivated successfully.
Nov 25 02:49:15 np0005534516 podman[77834]: 2025-11-25 07:49:15.336456609 +0000 UTC m=+0.048910582 container create 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:15 np0005534516 systemd[1]: Started libpod-conmon-7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab.scope.
Nov 25 02:49:15 np0005534516 podman[77834]: 2025-11-25 07:49:15.311936176 +0000 UTC m=+0.024390159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a28f6704f085019b1d2997c8f263c7dfdbbb117814717ec73c76e9317d7410/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a28f6704f085019b1d2997c8f263c7dfdbbb117814717ec73c76e9317d7410/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a28f6704f085019b1d2997c8f263c7dfdbbb117814717ec73c76e9317d7410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:15 np0005534516 podman[77834]: 2025-11-25 07:49:15.431634498 +0000 UTC m=+0.144088521 container init 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:49:15 np0005534516 podman[77834]: 2025-11-25 07:49:15.442834635 +0000 UTC m=+0.155288648 container start 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:15 np0005534516 podman[77834]: 2025-11-25 07:49:15.455344764 +0000 UTC m=+0.167798807 container attach 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: Saving service mgr spec with placement count:2
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:15 np0005534516 podman[77802]: 2025-11-25 07:49:15.539655857 +0000 UTC m=+0.407727410 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3592439255' entity='client.admin' 
Nov 25 02:49:16 np0005534516 systemd[1]: libpod-7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab.scope: Deactivated successfully.
Nov 25 02:49:16 np0005534516 podman[77834]: 2025-11-25 07:49:16.050493558 +0000 UTC m=+0.762947591 container died 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 02:49:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-26a28f6704f085019b1d2997c8f263c7dfdbbb117814717ec73c76e9317d7410-merged.mount: Deactivated successfully.
Nov 25 02:49:16 np0005534516 podman[77834]: 2025-11-25 07:49:16.12946908 +0000 UTC m=+0.841923053 container remove 7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab (image=quay.io/ceph/ceph:v18, name=blissful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:16 np0005534516 systemd[1]: libpod-conmon-7f8345a82fda0f15c6240c3630ecfd717bca91581ee0f869c6211eddf2fe07ab.scope: Deactivated successfully.
Nov 25 02:49:16 np0005534516 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78045 (sysctl)
Nov 25 02:49:16 np0005534516 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.201827931 +0000 UTC m=+0.047762457 container create 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:49:16 np0005534516 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 02:49:16 np0005534516 systemd[1]: Started libpod-conmon-1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a.scope.
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.179869689 +0000 UTC m=+0.025804255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887e03765874865ba0907fa6eb563d1142118a21049d3260f8e9e23bfe1c0c27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887e03765874865ba0907fa6eb563d1142118a21049d3260f8e9e23bfe1c0c27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887e03765874865ba0907fa6eb563d1142118a21049d3260f8e9e23bfe1c0c27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.318535687 +0000 UTC m=+0.164470243 container init 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.328193005 +0000 UTC m=+0.174127531 container start 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.334270779 +0000 UTC m=+0.180205305 container attach 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: Saving service crash spec with placement *
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3592439255' entity='client.admin' 
Nov 25 02:49:16 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 25 02:49:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:16 np0005534516 systemd[1]: libpod-1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a.scope: Deactivated successfully.
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.916103458 +0000 UTC m=+0.762038034 container died 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 02:49:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-887e03765874865ba0907fa6eb563d1142118a21049d3260f8e9e23bfe1c0c27-merged.mount: Deactivated successfully.
Nov 25 02:49:16 np0005534516 podman[78024]: 2025-11-25 07:49:16.967961163 +0000 UTC m=+0.813895689 container remove 1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a (image=quay.io/ceph/ceph:v18, name=vigilant_khorana, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:16 np0005534516 systemd[1]: libpod-conmon-1cbf7022a79c1436512b20e5e4fa729b474be94eb7a9225fe2f8fe864482db2a.scope: Deactivated successfully.
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.046299874 +0000 UTC m=+0.051565047 container create 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:49:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:17 np0005534516 systemd[1]: Started libpod-conmon-0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42.scope.
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.023662102 +0000 UTC m=+0.028927285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f881668248634da2099cdb7d60b5bd48fda6c5bc51d8617209f0477d93bd1b74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f881668248634da2099cdb7d60b5bd48fda6c5bc51d8617209f0477d93bd1b74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f881668248634da2099cdb7d60b5bd48fda6c5bc51d8617209f0477d93bd1b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.142501727 +0000 UTC m=+0.147766980 container init 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.148342333 +0000 UTC m=+0.153607496 container start 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.162042491 +0000 UTC m=+0.167307654 container attach 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:17 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:17 np0005534516 podman[78407]: 2025-11-25 07:49:17.714601664 +0000 UTC m=+0.102593607 container create cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:49:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:17 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 02:49:17 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 02:49:17 np0005534516 practical_hypatia[78248]: Added label _admin to host compute-0
Nov 25 02:49:17 np0005534516 podman[78407]: 2025-11-25 07:49:17.637919526 +0000 UTC m=+0.025911509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:17 np0005534516 systemd[1]: libpod-0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42.scope: Deactivated successfully.
Nov 25 02:49:17 np0005534516 podman[78215]: 2025-11-25 07:49:17.753646761 +0000 UTC m=+0.758911964 container died 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:17 np0005534516 systemd[1]: Started libpod-conmon-cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69.scope.
Nov 25 02:49:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f881668248634da2099cdb7d60b5bd48fda6c5bc51d8617209f0477d93bd1b74-merged.mount: Deactivated successfully.
Nov 25 02:49:17 np0005534516 podman[78407]: 2025-11-25 07:49:17.859012904 +0000 UTC m=+0.247004837 container init cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:17 np0005534516 podman[78407]: 2025-11-25 07:49:17.869377446 +0000 UTC m=+0.257369399 container start cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:49:17 np0005534516 infallible_payne[78437]: 167 167
Nov 25 02:49:17 np0005534516 systemd[1]: libpod-cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69.scope: Deactivated successfully.
Nov 25 02:49:17 np0005534516 conmon[78437]: conmon cfb8e7fb5bbb823b5d4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69.scope/container/memory.events
Nov 25 02:49:18 np0005534516 podman[78215]: 2025-11-25 07:49:18.092580762 +0000 UTC m=+1.097845975 container remove 0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42 (image=quay.io/ceph/ceph:v18, name=practical_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 02:49:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:18 np0005534516 ceph-mon[75015]: Added label _admin to host compute-0
Nov 25 02:49:18 np0005534516 systemd[1]: libpod-conmon-0cf7139fbc2683de5dd481248feccfd8116037434b862ec3b07a5e0001f37f42.scope: Deactivated successfully.
Nov 25 02:49:18 np0005534516 podman[78407]: 2025-11-25 07:49:18.134613515 +0000 UTC m=+0.522605458 container attach cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:18 np0005534516 podman[78407]: 2025-11-25 07:49:18.135801783 +0000 UTC m=+0.523793736 container died cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-40ce4bf3e5067c3eb4f9155740081c30dcdaf52921f368206d2eadd3c6de3987-merged.mount: Deactivated successfully.
Nov 25 02:49:18 np0005534516 podman[78456]: 2025-11-25 07:49:18.170609435 +0000 UTC m=+0.047113296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:18 np0005534516 podman[78407]: 2025-11-25 07:49:18.271349681 +0000 UTC m=+0.659341594 container remove cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:18 np0005534516 systemd[1]: libpod-conmon-cfb8e7fb5bbb823b5d4ab8959eb29212a69573b0bc18441571833daf5f93df69.scope: Deactivated successfully.
Nov 25 02:49:18 np0005534516 podman[78456]: 2025-11-25 07:49:18.293442937 +0000 UTC m=+0.169946778 container create 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:49:18 np0005534516 systemd[1]: Started libpod-conmon-7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057.scope.
Nov 25 02:49:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f32d01a68639e74f770fb499dddf0f124ad4fe3a84ed4e52a55077e46e67f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f32d01a68639e74f770fb499dddf0f124ad4fe3a84ed4e52a55077e46e67f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f32d01a68639e74f770fb499dddf0f124ad4fe3a84ed4e52a55077e46e67f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:18 np0005534516 podman[78456]: 2025-11-25 07:49:18.447003569 +0000 UTC m=+0.323507370 container init 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:18 np0005534516 podman[78456]: 2025-11-25 07:49:18.456112901 +0000 UTC m=+0.332616682 container start 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:49:18 np0005534516 podman[78456]: 2025-11-25 07:49:18.489238318 +0000 UTC m=+0.365742109 container attach 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:49:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 25 02:49:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/29443319' entity='client.admin' 
Nov 25 02:49:19 np0005534516 systemd[1]: libpod-7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057.scope: Deactivated successfully.
Nov 25 02:49:19 np0005534516 podman[78456]: 2025-11-25 07:49:19.140972458 +0000 UTC m=+1.017476269 container died 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 02:49:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d6f32d01a68639e74f770fb499dddf0f124ad4fe3a84ed4e52a55077e46e67f4-merged.mount: Deactivated successfully.
Nov 25 02:49:19 np0005534516 podman[78456]: 2025-11-25 07:49:19.655831857 +0000 UTC m=+1.532335678 container remove 7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057 (image=quay.io/ceph/ceph:v18, name=trusting_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:19 np0005534516 systemd[1]: libpod-conmon-7d9749aa30c9e7de0361698ac8047b17e9effa90272f72e4f8ea5cada00b5057.scope: Deactivated successfully.
Nov 25 02:49:19 np0005534516 podman[78515]: 2025-11-25 07:49:19.777771611 +0000 UTC m=+0.099315442 container create 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:19 np0005534516 podman[78515]: 2025-11-25 07:49:19.700901807 +0000 UTC m=+0.022445648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:19 np0005534516 systemd[1]: Started libpod-conmon-67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b.scope.
Nov 25 02:49:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4729a0fdc6785447d48c4d8b640408aa1cf263f4f3968522058005f7fb8417a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4729a0fdc6785447d48c4d8b640408aa1cf263f4f3968522058005f7fb8417a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4729a0fdc6785447d48c4d8b640408aa1cf263f4f3968522058005f7fb8417a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:20 np0005534516 podman[78515]: 2025-11-25 07:49:20.042917607 +0000 UTC m=+0.364461438 container init 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:49:20 np0005534516 podman[78515]: 2025-11-25 07:49:20.052977729 +0000 UTC m=+0.374521560 container start 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:49:20 np0005534516 podman[78515]: 2025-11-25 07:49:20.145154082 +0000 UTC m=+0.466697923 container attach 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:49:20 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/29443319' entity='client.admin' 
Nov 25 02:49:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 25 02:49:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3820689106' entity='client.admin' 
Nov 25 02:49:20 np0005534516 keen_fermat[78532]: set mgr/dashboard/cluster/status
Nov 25 02:49:20 np0005534516 systemd[1]: libpod-67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b.scope: Deactivated successfully.
Nov 25 02:49:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:20 np0005534516 podman[78558]: 2025-11-25 07:49:20.894242271 +0000 UTC m=+0.028472620 container died 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:49:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4729a0fdc6785447d48c4d8b640408aa1cf263f4f3968522058005f7fb8417a6-merged.mount: Deactivated successfully.
Nov 25 02:49:20 np0005534516 podman[78558]: 2025-11-25 07:49:20.986103054 +0000 UTC m=+0.120333403 container remove 67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b (image=quay.io/ceph/ceph:v18, name=keen_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:20 np0005534516 systemd[1]: libpod-conmon-67d6906e38344f039f725a13a6150c82d1b2d78e8cb44a8a1c0509ec1701a24b.scope: Deactivated successfully.
Nov 25 02:49:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:21 np0005534516 podman[78580]: 2025-11-25 07:49:21.186718139 +0000 UTC m=+0.035911087 container create 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 02:49:21 np0005534516 systemd[1]: Started libpod-conmon-415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2.scope.
Nov 25 02:49:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0597911fadcfbd0509393c85d896779fa5ee23e71182fd2d55cdd9cc36fe96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0597911fadcfbd0509393c85d896779fa5ee23e71182fd2d55cdd9cc36fe96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0597911fadcfbd0509393c85d896779fa5ee23e71182fd2d55cdd9cc36fe96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0597911fadcfbd0509393c85d896779fa5ee23e71182fd2d55cdd9cc36fe96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:21 np0005534516 podman[78580]: 2025-11-25 07:49:21.169122498 +0000 UTC m=+0.018315456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:21 np0005534516 podman[78580]: 2025-11-25 07:49:21.280789803 +0000 UTC m=+0.129982741 container init 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 02:49:21 np0005534516 podman[78580]: 2025-11-25 07:49:21.288957814 +0000 UTC m=+0.138150752 container start 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:49:21 np0005534516 podman[78580]: 2025-11-25 07:49:21.304332995 +0000 UTC m=+0.153525953 container attach 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:49:21 np0005534516 python3[78625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:21 np0005534516 podman[78626]: 2025-11-25 07:49:21.801284483 +0000 UTC m=+0.125923172 container create 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:49:21 np0005534516 podman[78626]: 2025-11-25 07:49:21.712401965 +0000 UTC m=+0.037040724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:21 np0005534516 systemd[1]: Started libpod-conmon-6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55.scope.
Nov 25 02:49:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cac2ed41c363c53400b93936470d8e3efbcc9f0537dc0c0f253812a80d66d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cac2ed41c363c53400b93936470d8e3efbcc9f0537dc0c0f253812a80d66d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3820689106' entity='client.admin' 
Nov 25 02:49:22 np0005534516 podman[78626]: 2025-11-25 07:49:22.19425266 +0000 UTC m=+0.518891379 container init 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:22 np0005534516 podman[78626]: 2025-11-25 07:49:22.203402192 +0000 UTC m=+0.528040881 container start 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:22 np0005534516 podman[78626]: 2025-11-25 07:49:22.307784806 +0000 UTC m=+0.632423515 container attach 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]: [
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:    {
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "available": false,
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "ceph_device": false,
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "lsm_data": {},
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "lvs": [],
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "path": "/dev/sr0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "rejected_reasons": [
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "Has a FileSystem",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "Insufficient space (<5GB)"
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        ],
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        "sys_api": {
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "actuators": null,
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "device_nodes": "sr0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "devname": "sr0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "human_readable_size": "482.00 KB",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "id_bus": "ata",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "model": "QEMU DVD-ROM",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "nr_requests": "2",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "parent": "/dev/sr0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "partitions": {},
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "path": "/dev/sr0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "removable": "1",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "rev": "2.5+",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "ro": "0",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "rotational": "1",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "sas_address": "",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "sas_device_handle": "",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "scheduler_mode": "mq-deadline",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "sectors": 0,
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "sectorsize": "2048",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "size": 493568.0,
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "support_discard": "2048",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "type": "disk",
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:            "vendor": "QEMU"
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:        }
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]:    }
Nov 25 02:49:22 np0005534516 optimistic_murdock[78595]: ]
Nov 25 02:49:22 np0005534516 systemd[1]: libpod-415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2.scope: Deactivated successfully.
Nov 25 02:49:22 np0005534516 systemd[1]: libpod-415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2.scope: Consumed 1.410s CPU time.
Nov 25 02:49:22 np0005534516 podman[78580]: 2025-11-25 07:49:22.696377883 +0000 UTC m=+1.545570861 container died 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 25 02:49:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4b0597911fadcfbd0509393c85d896779fa5ee23e71182fd2d55cdd9cc36fe96-merged.mount: Deactivated successfully.
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1420282523' entity='client.admin' 
Nov 25 02:49:22 np0005534516 systemd[1]: libpod-6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55.scope: Deactivated successfully.
Nov 25 02:49:22 np0005534516 podman[78580]: 2025-11-25 07:49:22.815946491 +0000 UTC m=+1.665139429 container remove 415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:22 np0005534516 podman[78626]: 2025-11-25 07:49:22.821081026 +0000 UTC m=+1.145719755 container died 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 25 02:49:22 np0005534516 systemd[1]: libpod-conmon-415f3ce256c3452c8fd17caaee64e5f5d86ec8d231f6cb854ed896c660c40df2.scope: Deactivated successfully.
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-be0cac2ed41c363c53400b93936470d8e3efbcc9f0537dc0c0f253812a80d66d-merged.mount: Deactivated successfully.
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:22 np0005534516 podman[78626]: 2025-11-25 07:49:22.895108829 +0000 UTC m=+1.219747548 container remove 6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55 (image=quay.io/ceph/ceph:v18, name=kind_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:49:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:22 np0005534516 systemd[1]: libpod-conmon-6b75baf013a77d191e53bfe71e69fa727721162e68200ccfdf2badb84b1f0e55.scope: Deactivated successfully.
Nov 25 02:49:22 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 02:49:22 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1420282523' entity='client.admin' 
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:23 np0005534516 ceph-mon[75015]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 02:49:23 np0005534516 ansible-async_wrapper.py[80679]: Invoked with j82705903355 30 /home/zuul/.ansible/tmp/ansible-tmp-1764056963.2433114-36447-139491917118299/AnsiballZ_command.py _
Nov 25 02:49:23 np0005534516 ansible-async_wrapper.py[80733]: Starting module and watcher
Nov 25 02:49:23 np0005534516 ansible-async_wrapper.py[80733]: Start watching 80734 (30)
Nov 25 02:49:23 np0005534516 ansible-async_wrapper.py[80734]: Start module (80734)
Nov 25 02:49:23 np0005534516 ansible-async_wrapper.py[80679]: Return async_wrapper task started.
Nov 25 02:49:23 np0005534516 python3[80736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.conf
Nov 25 02:49:23 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.conf
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.030074639 +0000 UTC m=+0.035669220 container create 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:49:24 np0005534516 systemd[1]: Started libpod-conmon-428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59.scope.
Nov 25 02:49:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4085024be4c8e3d5da3458d035cf528cfe360c1eac1e6057526df197e0fe82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4085024be4c8e3d5da3458d035cf528cfe360c1eac1e6057526df197e0fe82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.102494921 +0000 UTC m=+0.108089522 container init 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.108456621 +0000 UTC m=+0.114051202 container start 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.111363194 +0000 UTC m=+0.116957775 container attach 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.016241207 +0000 UTC m=+0.021835798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:24 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:49:24 np0005534516 xenodochial_vaughan[80870]: 
Nov 25 02:49:24 np0005534516 xenodochial_vaughan[80870]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 02:49:24 np0005534516 systemd[1]: libpod-428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59.scope: Deactivated successfully.
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.669117053 +0000 UTC m=+0.674711634 container died 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fa4085024be4c8e3d5da3458d035cf528cfe360c1eac1e6057526df197e0fe82-merged.mount: Deactivated successfully.
Nov 25 02:49:24 np0005534516 podman[80810]: 2025-11-25 07:49:24.71753332 +0000 UTC m=+0.723127901 container remove 428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59 (image=quay.io/ceph/ceph:v18, name=xenodochial_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:24 np0005534516 systemd[1]: libpod-conmon-428b16aa6af7431cbc5dcb337c7031b20b885cbd223e86f84b5e3b5dbfd96f59.scope: Deactivated successfully.
Nov 25 02:49:24 np0005534516 ansible-async_wrapper.py[80734]: Module complete (80734)
Nov 25 02:49:24 np0005534516 ceph-mon[75015]: Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.conf
Nov 25 02:49:25 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 02:49:25 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 02:49:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:25 np0005534516 python3[81359]: ansible-ansible.legacy.async_status Invoked with jid=j82705903355.80679 mode=status _async_dir=/root/.ansible_async
Nov 25 02:49:25 np0005534516 python3[81513]: ansible-ansible.legacy.async_status Invoked with jid=j82705903355.80679 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 02:49:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:25 np0005534516 ceph-mon[75015]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 02:49:26 np0005534516 python3[81709]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 02:49:26 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.client.admin.keyring
Nov 25 02:49:26 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.client.admin.keyring
Nov 25 02:49:26 np0005534516 python3[81886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:26 np0005534516 podman[81958]: 2025-11-25 07:49:26.548216794 +0000 UTC m=+0.053419097 container create 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:26 np0005534516 systemd[1]: Started libpod-conmon-2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a.scope.
Nov 25 02:49:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5262edfc6156b33efc85923b8622f746ca3ee598ec00039a182ff57285c82961/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5262edfc6156b33efc85923b8622f746ca3ee598ec00039a182ff57285c82961/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5262edfc6156b33efc85923b8622f746ca3ee598ec00039a182ff57285c82961/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:26 np0005534516 podman[81958]: 2025-11-25 07:49:26.522324997 +0000 UTC m=+0.027527310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:26 np0005534516 podman[81958]: 2025-11-25 07:49:26.690818077 +0000 UTC m=+0.196020370 container init 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:26 np0005534516 podman[81958]: 2025-11-25 07:49:26.698874175 +0000 UTC m=+0.204076458 container start 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:26 np0005534516 podman[81958]: 2025-11-25 07:49:26.773488847 +0000 UTC m=+0.278691140 container attach 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: Updating compute-0:/var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/config/ceph.client.admin.keyring
Nov 25 02:49:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:27 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:49:27 np0005534516 trusting_sutherland[82008]: 
Nov 25 02:49:27 np0005534516 trusting_sutherland[82008]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 02:49:27 np0005534516 systemd[1]: libpod-2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a.scope: Deactivated successfully.
Nov 25 02:49:27 np0005534516 podman[81958]: 2025-11-25 07:49:27.308898883 +0000 UTC m=+0.814101226 container died 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:49:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5262edfc6156b33efc85923b8622f746ca3ee598ec00039a182ff57285c82961-merged.mount: Deactivated successfully.
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:27 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 8063c225-d7fc-4ab2-a7dd-c7968ec9cba6 (Updating crash deployment (+1 -> 1))
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:27 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 02:49:27 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 02:49:27 np0005534516 podman[81958]: 2025-11-25 07:49:27.42500279 +0000 UTC m=+0.930205083 container remove 2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a (image=quay.io/ceph/ceph:v18, name=trusting_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:49:27 np0005534516 systemd[1]: libpod-conmon-2b41605fa66c95d4708395573699bf542921b1741dd7b24f8500df84977eb12a.scope: Deactivated successfully.
Nov 25 02:49:27 np0005534516 python3[82462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:27 np0005534516 podman[82480]: 2025-11-25 07:49:27.956944675 +0000 UTC m=+0.046478825 container create 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:27 np0005534516 systemd[1]: Started libpod-conmon-22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e.scope.
Nov 25 02:49:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fa3ab4cdc8708851d9d0ceb4231f1974729ff3f7149077390e4c296f8133a9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fa3ab4cdc8708851d9d0ceb4231f1974729ff3f7149077390e4c296f8133a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fa3ab4cdc8708851d9d0ceb4231f1974729ff3f7149077390e4c296f8133a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:27.935811421 +0000 UTC m=+0.025345601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.035472613 +0000 UTC m=+0.040682760 container create f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:28.042560799 +0000 UTC m=+0.132094969 container init 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:28.049454009 +0000 UTC m=+0.138988159 container start 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:28.062530557 +0000 UTC m=+0.152064727 container attach 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:28 np0005534516 systemd[1]: Started libpod-conmon-f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2.scope.
Nov 25 02:49:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.11212905 +0000 UTC m=+0.117339217 container init f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.016372103 +0000 UTC m=+0.021582270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.118410741 +0000 UTC m=+0.123620928 container start f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 02:49:28 np0005534516 hopeful_burnell[82540]: 167 167
Nov 25 02:49:28 np0005534516 systemd[1]: libpod-f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2.scope: Deactivated successfully.
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.124941279 +0000 UTC m=+0.130151446 container attach f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.12558937 +0000 UTC m=+0.130799517 container died f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:49:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cf596d075e0740794ff0be618470636c239c49169fcda040e0ae5d4588cb15a3-merged.mount: Deactivated successfully.
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: Deploying daemon crash.compute-0 on compute-0
Nov 25 02:49:28 np0005534516 podman[82520]: 2025-11-25 07:49:28.441752655 +0000 UTC m=+0.446962802 container remove f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:28 np0005534516 systemd[1]: libpod-conmon-f0e186f68d9cb7fd40c23fba8b9ecc9ae1967cc933a82fdf7edfc4f4e388f6f2.scope: Deactivated successfully.
Nov 25 02:49:28 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 25 02:49:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2722731675' entity='client.admin' 
Nov 25 02:49:28 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:28 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:28.620017828 +0000 UTC m=+0.709552048 container died 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 02:49:28 np0005534516 systemd[1]: libpod-22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e.scope: Deactivated successfully.
Nov 25 02:49:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-28fa3ab4cdc8708851d9d0ceb4231f1974729ff3f7149077390e4c296f8133a9-merged.mount: Deactivated successfully.
Nov 25 02:49:28 np0005534516 podman[82480]: 2025-11-25 07:49:28.825648073 +0000 UTC m=+0.915182223 container remove 22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e (image=quay.io/ceph/ceph:v18, name=awesome_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:28 np0005534516 systemd[1]: libpod-conmon-22ff3c2abdacf13a2f389f6d4d7089696c4fdf574e7af2607a24fb6225a2bb3e.scope: Deactivated successfully.
Nov 25 02:49:28 np0005534516 ansible-async_wrapper.py[80733]: Done in kid B.
Nov 25 02:49:28 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:28 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:28 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:29 np0005534516 systemd[1]: Starting Ceph crash.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:49:29 np0005534516 python3[82697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:29 np0005534516 podman[82728]: 2025-11-25 07:49:29.291422495 +0000 UTC m=+0.040557675 container create c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 02:49:29 np0005534516 systemd[1]: Started libpod-conmon-c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609.scope.
Nov 25 02:49:29 np0005534516 podman[82759]: 2025-11-25 07:49:29.335905846 +0000 UTC m=+0.041310000 container create 32ccbc05690cca01896d4723bb87909aa036136f6a04219700340c1b7879cfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:29 np0005534516 podman[82728]: 2025-11-25 07:49:29.27309036 +0000 UTC m=+0.022225570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0287e2ffa18117580d64ba0ab050f85775ac3a525f7143a0241d1226a2a5e4ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0287e2ffa18117580d64ba0ab050f85775ac3a525f7143a0241d1226a2a5e4ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0287e2ffa18117580d64ba0ab050f85775ac3a525f7143a0241d1226a2a5e4ac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cfbfc4cdeefde271b19cf39aabb5587931b3e22236d37870b37fc26515ce51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cfbfc4cdeefde271b19cf39aabb5587931b3e22236d37870b37fc26515ce51/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cfbfc4cdeefde271b19cf39aabb5587931b3e22236d37870b37fc26515ce51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cfbfc4cdeefde271b19cf39aabb5587931b3e22236d37870b37fc26515ce51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:29 np0005534516 podman[82728]: 2025-11-25 07:49:29.384098555 +0000 UTC m=+0.133233755 container init c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:29 np0005534516 podman[82728]: 2025-11-25 07:49:29.394546968 +0000 UTC m=+0.143682148 container start c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:49:29 np0005534516 podman[82759]: 2025-11-25 07:49:29.400159427 +0000 UTC m=+0.105563621 container init 32ccbc05690cca01896d4723bb87909aa036136f6a04219700340c1b7879cfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 02:49:29 np0005534516 podman[82759]: 2025-11-25 07:49:29.406681196 +0000 UTC m=+0.112085360 container start 32ccbc05690cca01896d4723bb87909aa036136f6a04219700340c1b7879cfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:49:29 np0005534516 podman[82728]: 2025-11-25 07:49:29.408649529 +0000 UTC m=+0.157784809 container attach c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:29 np0005534516 podman[82759]: 2025-11-25 07:49:29.321555178 +0000 UTC m=+0.026959332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:29 np0005534516 bash[82759]: 32ccbc05690cca01896d4723bb87909aa036136f6a04219700340c1b7879cfbe
Nov 25 02:49:29 np0005534516 systemd[1]: Started Ceph crash.compute-0 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 8063c225-d7fc-4ab2-a7dd-c7968ec9cba6 (Updating crash deployment (+1 -> 1))
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 8063c225-d7fc-4ab2-a7dd-c7968ec9cba6 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev dc17f1ac-89c5-4444-978e-13cbe8cfc84f does not exist
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev cc16cd01-9269-4298-a887-74342ac4e47e (Updating mgr deployment (+1 -> 2))
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jngluz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jngluz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jngluz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jngluz on compute-0
Nov 25 02:49:29 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jngluz on compute-0
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2722731675' entity='client.admin' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jngluz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jngluz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.808+0000 7efe62dc3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.808+0000 7efe62dc3640 -1 AuthRegistry(0x7efe5c066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.809+0000 7efe62dc3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.809+0000 7efe62dc3640 -1 AuthRegistry(0x7efe62dc2000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.809+0000 7efe60b38640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: 2025-11-25T07:49:29.809+0000 7efe62dc3640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 02:49:29 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-crash-compute-0[82779]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 25 02:49:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4204554603' entity='client.admin' 
Nov 25 02:49:29 np0005534516 systemd[1]: libpod-c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609.scope: Deactivated successfully.
Nov 25 02:49:30 np0005534516 podman[82935]: 2025-11-25 07:49:30.000769706 +0000 UTC m=+0.025978481 container died c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0287e2ffa18117580d64ba0ab050f85775ac3a525f7143a0241d1226a2a5e4ac-merged.mount: Deactivated successfully.
Nov 25 02:49:30 np0005534516 podman[82935]: 2025-11-25 07:49:30.04817865 +0000 UTC m=+0.073387425 container remove c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609 (image=quay.io/ceph/ceph:v18, name=hopeful_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:30 np0005534516 systemd[1]: libpod-conmon-c2a55c9794184ce87ba66ca6d46569c105f85b4c1abbf62019f25eff3bb00609.scope: Deactivated successfully.
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.123817795 +0000 UTC m=+0.042786678 container create 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:30 np0005534516 systemd[1]: Started libpod-conmon-9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68.scope.
Nov 25 02:49:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.102236795 +0000 UTC m=+0.021205738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.307718376 +0000 UTC m=+0.226687359 container init 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.315872257 +0000 UTC m=+0.234841140 container start 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.318947815 +0000 UTC m=+0.237916738 container attach 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:30 np0005534516 charming_bohr[82988]: 167 167
Nov 25 02:49:30 np0005534516 systemd[1]: libpod-9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68.scope: Deactivated successfully.
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.322024773 +0000 UTC m=+0.240993656 container died 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 02:49:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-24ebbb3589d180d773e29e65b42811b88c78c0660006f37dc516674d52576d9f-merged.mount: Deactivated successfully.
Nov 25 02:49:30 np0005534516 podman[82972]: 2025-11-25 07:49:30.365877293 +0000 UTC m=+0.284846166 container remove 9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:49:30 np0005534516 systemd[1]: libpod-conmon-9b45a6090eae2fee399568d350fcc67018d618fd327bc7eff321b8c25cf36d68.scope: Deactivated successfully.
Nov 25 02:49:30 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:30 np0005534516 python3[83016]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:30 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:30 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:30 np0005534516 podman[83034]: 2025-11-25 07:49:30.489584093 +0000 UTC m=+0.026011851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:30 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:30 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:30 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:30 np0005534516 podman[83034]: 2025-11-25 07:49:30.822177902 +0000 UTC m=+0.358626421 container create f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: Deploying daemon mgr.compute-0.jngluz on compute-0
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/4204554603' entity='client.admin' 
Nov 25 02:49:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:31 np0005534516 systemd[1]: Started libpod-conmon-f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5.scope.
Nov 25 02:49:31 np0005534516 systemd[1]: Starting Ceph mgr.compute-0.jngluz for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:49:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1596c4c6c24cbe82e55e10037fe48e3691bbe8c5fcbd0bdc48c855cb6c57ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1596c4c6c24cbe82e55e10037fe48e3691bbe8c5fcbd0bdc48c855cb6c57ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1596c4c6c24cbe82e55e10037fe48e3691bbe8c5fcbd0bdc48c855cb6c57ab/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 podman[83034]: 2025-11-25 07:49:31.135163256 +0000 UTC m=+0.671591004 container init f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:49:31 np0005534516 podman[83034]: 2025-11-25 07:49:31.146148587 +0000 UTC m=+0.682576335 container start f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:31 np0005534516 podman[83034]: 2025-11-25 07:49:31.153657586 +0000 UTC m=+0.690085334 container attach f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 02:49:31 np0005534516 podman[83174]: 2025-11-25 07:49:31.323691446 +0000 UTC m=+0.050714610 container create 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90d39ce31e57cf06c41d92dd1485a7ada2626e41caa9608be239ded6a8988d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90d39ce31e57cf06c41d92dd1485a7ada2626e41caa9608be239ded6a8988d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90d39ce31e57cf06c41d92dd1485a7ada2626e41caa9608be239ded6a8988d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 podman[83174]: 2025-11-25 07:49:31.301146206 +0000 UTC m=+0.028169400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90d39ce31e57cf06c41d92dd1485a7ada2626e41caa9608be239ded6a8988d5/merged/var/lib/ceph/mgr/ceph-compute-0.jngluz supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:31 np0005534516 podman[83174]: 2025-11-25 07:49:31.422202201 +0000 UTC m=+0.149225405 container init 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:31 np0005534516 podman[83174]: 2025-11-25 07:49:31.428820302 +0000 UTC m=+0.155843466 container start 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:49:31 np0005534516 bash[83174]: 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de
Nov 25 02:49:31 np0005534516 systemd[1]: Started Ceph mgr.compute-0.jngluz for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: pidfile_write: ignore empty --pid-file
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:31 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev cc16cd01-9269-4298-a887-74342ac4e47e (Updating mgr deployment (+1 -> 2))
Nov 25 02:49:31 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event cc16cd01-9269-4298-a887-74342ac4e47e (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'alerts'
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 25 02:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/846103495' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:49:31 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'balancer'
Nov 25 02:49:31 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:31.905+0000 7f8afb986140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 02:49:32 np0005534516 ceph-mgr[83193]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:49:32 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'cephadm'
Nov 25 02:49:32 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:32.163+0000 7f8afb986140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:32 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/846103495' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 02:49:32 np0005534516 podman[83461]: 2025-11-25 07:49:32.781288667 +0000 UTC m=+0.479076658 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:49:33 np0005534516 podman[83482]: 2025-11-25 07:49:33.06854245 +0000 UTC m=+0.182295882 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/846103495' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 02:49:33 np0005534516 jovial_varahamihira[83124]: set require_min_compat_client to mimic
Nov 25 02:49:33 np0005534516 systemd[1]: libpod-f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5.scope: Deactivated successfully.
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 02:49:33 np0005534516 podman[83461]: 2025-11-25 07:49:33.219606603 +0000 UTC m=+0.917394554 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 2 completed events
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:49:33 np0005534516 podman[83034]: 2025-11-25 07:49:33.394418525 +0000 UTC m=+2.930846273 container died f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1f1596c4c6c24cbe82e55e10037fe48e3691bbe8c5fcbd0bdc48c855cb6c57ab-merged.mount: Deactivated successfully.
Nov 25 02:49:33 np0005534516 podman[83034]: 2025-11-25 07:49:33.537020688 +0000 UTC m=+3.073448436 container remove f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5 (image=quay.io/ceph/ceph:v18, name=jovial_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:49:33 np0005534516 systemd[1]: libpod-conmon-f216d02a56d96960c8698cba067b2cf57a9b74cab3d2da7f4b894a87927e1ef5.scope: Deactivated successfully.
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 322c659d-08e4-4aba-b3de-5dc57ca8c213 does not exist
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f9bab38a-318d-465d-9f91-986b71a32881 does not exist
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f2265b7a-7979-49d4-8179-4fc6a4e2394b does not exist
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 02:49:33 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 02:49:34 np0005534516 python3[83644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/846103495' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 02:49:34 np0005534516 podman[83695]: 2025-11-25 07:49:34.197535059 +0000 UTC m=+0.052464207 container create 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:49:34 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'crash'
Nov 25 02:49:34 np0005534516 systemd[1]: Started libpod-conmon-607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62.scope.
Nov 25 02:49:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:34 np0005534516 podman[83695]: 2025-11-25 07:49:34.17128622 +0000 UTC m=+0.026215428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042c5e2034634b0515b2b46653337c75ea94af79d18eb607f1c8a376d836fba3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042c5e2034634b0515b2b46653337c75ea94af79d18eb607f1c8a376d836fba3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042c5e2034634b0515b2b46653337c75ea94af79d18eb607f1c8a376d836fba3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:34 np0005534516 podman[83695]: 2025-11-25 07:49:34.338799339 +0000 UTC m=+0.193728527 container init 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:34 np0005534516 podman[83695]: 2025-11-25 07:49:34.345606097 +0000 UTC m=+0.200535255 container start 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:49:34 np0005534516 podman[83695]: 2025-11-25 07:49:34.352104344 +0000 UTC m=+0.207033522 container attach 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:34 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:34.510+0000 7f8afb986140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:49:34 np0005534516 ceph-mgr[83193]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 02:49:34 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'dashboard'
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.538055541 +0000 UTC m=+0.053106407 container create 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:34 np0005534516 systemd[1]: Started libpod-conmon-8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87.scope.
Nov 25 02:49:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.609652198 +0000 UTC m=+0.124703114 container init 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.519605483 +0000 UTC m=+0.034656369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.617839739 +0000 UTC m=+0.132890605 container start 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:34 np0005534516 festive_pascal[83795]: 167 167
Nov 25 02:49:34 np0005534516 systemd[1]: libpod-8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87.scope: Deactivated successfully.
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.622621942 +0000 UTC m=+0.137672828 container attach 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.62318929 +0000 UTC m=+0.138240156 container died 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:49:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-583739f0604de921b9258a21da360957cf74d29dc738856ecef373c43a079d83-merged.mount: Deactivated successfully.
Nov 25 02:49:34 np0005534516 podman[83778]: 2025-11-25 07:49:34.801852534 +0000 UTC m=+0.316903410 container remove 8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:34 np0005534516 systemd[1]: libpod-conmon-8dabd1cc69a9a0dbdcf0b1042ed69b1c7a6a7fc74b84c0a27034b4038201dd87.scope: Deactivated successfully.
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:34 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.cpskve (unknown last config time)...
Nov 25 02:49:34 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.cpskve (unknown last config time)...
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cpskve", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cpskve", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:34 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.cpskve on compute-0
Nov 25 02:49:34 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.cpskve on compute-0
Nov 25 02:49:34 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cpskve", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.386477982 +0000 UTC m=+0.050438372 container create 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 02:49:35 np0005534516 systemd[1]: Started libpod-conmon-2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07.scope.
Nov 25 02:49:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.361182924 +0000 UTC m=+0.025143294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.459121651 +0000 UTC m=+0.123082001 container init 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Added host compute-0
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.465249767 +0000 UTC m=+0.129210117 container start 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.468467719 +0000 UTC m=+0.132428079 container attach 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 02:49:35 np0005534516 awesome_matsumoto[84082]: 167 167
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 systemd[1]: libpod-2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07.scope: Deactivated successfully.
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.471752985 +0000 UTC m=+0.135713345 container died 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 kind_swirles[83748]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 02:49:35 np0005534516 kind_swirles[83748]: Scheduled mon update...
Nov 25 02:49:35 np0005534516 kind_swirles[83748]: Scheduled mgr update...
Nov 25 02:49:35 np0005534516 kind_swirles[83748]: Scheduled osd.default_drive_group update...
Nov 25 02:49:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3317be47bca424b59629792dcaf2dabe1424c6028dc94536ba3f601f6027128e-merged.mount: Deactivated successfully.
Nov 25 02:49:35 np0005534516 systemd[1]: libpod-607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62.scope: Deactivated successfully.
Nov 25 02:49:35 np0005534516 podman[83695]: 2025-11-25 07:49:35.564503907 +0000 UTC m=+1.419433065 container died 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 02:49:35 np0005534516 podman[84065]: 2025-11-25 07:49:35.578837064 +0000 UTC m=+0.242797414 container remove 2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:35 np0005534516 systemd[1]: libpod-conmon-2c1c8357b8a1688c3cec070ae9c9384dbb09a1fc5e03cd878faae5f54c95eb07.scope: Deactivated successfully.
Nov 25 02:49:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-042c5e2034634b0515b2b46653337c75ea94af79d18eb607f1c8a376d836fba3-merged.mount: Deactivated successfully.
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:35 np0005534516 podman[83695]: 2025-11-25 07:49:35.65857617 +0000 UTC m=+1.513505328 container remove 607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62 (image=quay.io/ceph/ceph:v18, name=kind_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:35 np0005534516 systemd[1]: libpod-conmon-607df9a0e68cf017ac905115e0516a50752e8b48d7e5c89ab393ab0af937eb62.scope: Deactivated successfully.
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:36 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'devicehealth'
Nov 25 02:49:36 np0005534516 python3[84213]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.122327278 +0000 UTC m=+0.042805468 container create 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:36 np0005534516 systemd[1]: Started libpod-conmon-8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510.scope.
Nov 25 02:49:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34237df574129e95d0b377539585b52aa45f3efac26d2f311c6568f27179ca6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34237df574129e95d0b377539585b52aa45f3efac26d2f311c6568f27179ca6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34237df574129e95d0b377539585b52aa45f3efac26d2f311c6568f27179ca6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.190288408 +0000 UTC m=+0.110766598 container init 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.10016199 +0000 UTC m=+0.020640200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.201506606 +0000 UTC m=+0.121984796 container start 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.22824865 +0000 UTC m=+0.148726860 container attach 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:49:36 np0005534516 ceph-mgr[83193]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:49:36 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 02:49:36 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:36.289+0000 7f8afb986140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Reconfiguring mgr.compute-0.cpskve (unknown last config time)...
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Reconfiguring daemon mgr.compute-0.cpskve on compute-0
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Added host compute-0
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Saving service mon spec with placement compute-0
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Saving service mgr spec with placement compute-0
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: Saving service osd.default_drive_group spec with placement compute-0
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:36 np0005534516 podman[84324]: 2025-11-25 07:49:36.634589215 +0000 UTC m=+0.174252416 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:36 np0005534516 podman[84324]: 2025-11-25 07:49:36.745686742 +0000 UTC m=+0.285349903 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:36 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 02:49:36 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 02:49:36 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]:  from numpy import show_config as show_numpy_config
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 02:49:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729322584' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 02:49:36 np0005534516 stoic_wing[84273]: 
Nov 25 02:49:36 np0005534516 stoic_wing[84273]: {"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T07:48:01.745386+0000","services":{}},"progress_events":{}}
Nov 25 02:49:36 np0005534516 ceph-mgr[83193]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:49:36 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:36.859+0000 7f8afb986140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 02:49:36 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'influx'
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.870998113 +0000 UTC m=+0.791476303 container died 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:49:36 np0005534516 systemd[1]: libpod-8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510.scope: Deactivated successfully.
Nov 25 02:49:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d34237df574129e95d0b377539585b52aa45f3efac26d2f311c6568f27179ca6-merged.mount: Deactivated successfully.
Nov 25 02:49:36 np0005534516 podman[84240]: 2025-11-25 07:49:36.923387826 +0000 UTC m=+0.843866016 container remove 8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510 (image=quay.io/ceph/ceph:v18, name=stoic_wing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:36 np0005534516 systemd[1]: libpod-conmon-8c4cdcc045ec1b732645d250fad79b291aba081cdae4e2a59b651b0b49130510.scope: Deactivated successfully.
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev df3baceb-1f23-4326-82c9-bb9d1922a78b does not exist
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 217faaca-c7f9-4e77-98b6-e4c678e1f740 (Updating mgr deployment (-1 -> 1))
Nov 25 02:49:37 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jngluz from compute-0 -- ports [8765]
Nov 25 02:49:37 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jngluz from compute-0 -- ports [8765]
Nov 25 02:49:37 np0005534516 ceph-mgr[83193]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:49:37 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'insights'
Nov 25 02:49:37 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:37.137+0000 7f8afb986140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 02:49:37 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'iostat'
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:37 np0005534516 systemd[1]: Stopping Ceph mgr.compute-0.jngluz for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:49:37 np0005534516 ceph-mgr[83193]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:49:37 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz[83189]: 2025-11-25T07:49:37.662+0000 7f8afb986140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 02:49:37 np0005534516 ceph-mgr[83193]: mgr[py] Loading python module 'k8sevents'
Nov 25 02:49:37 np0005534516 podman[84608]: 2025-11-25 07:49:37.836790241 +0000 UTC m=+0.077625790 container died 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:49:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c90d39ce31e57cf06c41d92dd1485a7ada2626e41caa9608be239ded6a8988d5-merged.mount: Deactivated successfully.
Nov 25 02:49:37 np0005534516 podman[84608]: 2025-11-25 07:49:37.91503321 +0000 UTC m=+0.155868739 container remove 7162b56f8c7e1930e49152ef5be9659c08b4f35097de23f4f393593224bec1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:49:37 np0005534516 bash[84608]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-jngluz
Nov 25 02:49:37 np0005534516 systemd[1]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mgr.compute-0.jngluz.service: Main process exited, code=exited, status=143/n/a
Nov 25 02:49:38 np0005534516 systemd[1]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mgr.compute-0.jngluz.service: Failed with result 'exit-code'.
Nov 25 02:49:38 np0005534516 systemd[1]: Stopped Ceph mgr.compute-0.jngluz for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:49:38 np0005534516 systemd[1]: ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mgr.compute-0.jngluz.service: Consumed 7.187s CPU time.
Nov 25 02:49:38 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:38 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:38 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jngluz
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jngluz"} v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jngluz"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jngluz
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jngluz"}]': finished
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 217faaca-c7f9-4e77-98b6-e4c678e1f740 (Updating mgr deployment (-1 -> 1))
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 217faaca-c7f9-4e77-98b6-e4c678e1f740 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eb4ff415-a7cb-41e8-aba5-331c2ae652e1 does not exist
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 3 completed events
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: Removing daemon mgr.compute-0.jngluz from compute-0 -- ports [8765]
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jngluz"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jngluz"}]': finished
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:49:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:49:38 np0005534516 podman[84846]: 2025-11-25 07:49:38.929233413 +0000 UTC m=+0.036465615 container create 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 02:49:38 np0005534516 systemd[1]: Started libpod-conmon-523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db.scope.
Nov 25 02:49:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:38.914942587 +0000 UTC m=+0.022174809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:39.011441388 +0000 UTC m=+0.118673590 container init 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:39.024565897 +0000 UTC m=+0.131798139 container start 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:39.028396269 +0000 UTC m=+0.135628511 container attach 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:49:39 np0005534516 musing_darwin[84862]: 167 167
Nov 25 02:49:39 np0005534516 systemd[1]: libpod-523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db.scope: Deactivated successfully.
Nov 25 02:49:39 np0005534516 conmon[84862]: conmon 523758bcaf38fa599958 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db.scope/container/memory.events
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:39.033621496 +0000 UTC m=+0.140853738 container died 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e47516bcabc30f4afdabb777107649270ae76712b80069c6ad312a2509da4643-merged.mount: Deactivated successfully.
Nov 25 02:49:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:39 np0005534516 podman[84846]: 2025-11-25 07:49:39.090561304 +0000 UTC m=+0.197793516 container remove 523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:39 np0005534516 systemd[1]: libpod-conmon-523758bcaf38fa599958118b9b2c3e14d798b1f236f1995a762136cc0d7834db.scope: Deactivated successfully.
Nov 25 02:49:39 np0005534516 podman[84886]: 2025-11-25 07:49:39.249664455 +0000 UTC m=+0.043932434 container create 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:39 np0005534516 systemd[1]: Started libpod-conmon-10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1.scope.
Nov 25 02:49:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:39 np0005534516 podman[84886]: 2025-11-25 07:49:39.231071712 +0000 UTC m=+0.025339671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:39 np0005534516 podman[84886]: 2025-11-25 07:49:39.331435245 +0000 UTC m=+0.125703214 container init 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:49:39 np0005534516 podman[84886]: 2025-11-25 07:49:39.339487253 +0000 UTC m=+0.133755192 container start 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:49:39 np0005534516 podman[84886]: 2025-11-25 07:49:39.343482141 +0000 UTC m=+0.137750120 container attach 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:49:39 np0005534516 ceph-mon[75015]: Removing key for mgr.compute-0.jngluz
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: --> relative data size: 1.0
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fa0f8d90-5df8-4d42-9078-da082765696d
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fa0f8d90-5df8-4d42-9078-da082765696d"} v 0) v1
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/701392226' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa0f8d90-5df8-4d42-9078-da082765696d"}]: dispatch
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/701392226' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fa0f8d90-5df8-4d42-9078-da082765696d"}]': finished
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:49:40 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:40 np0005534516 lvm[84963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 02:49:40 np0005534516 lvm[84963]: VG ceph_vg0 finished
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 02:49:40 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 02:49:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845485651' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 02:49:41 np0005534516 charming_bassi[84902]: stderr: got monmap epoch 1
Nov 25 02:49:41 np0005534516 charming_bassi[84902]: --> Creating keyring file for osd.0
Nov 25 02:49:41 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 02:49:41 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 02:49:41 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid fa0f8d90-5df8-4d42-9078-da082765696d --setuser ceph --setgroup ceph
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/701392226' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa0f8d90-5df8-4d42-9078-da082765696d"}]: dispatch
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/701392226' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fa0f8d90-5df8-4d42-9078-da082765696d"}]': finished
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 02:49:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:49:42 np0005534516 ceph-mon[75015]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 02:49:42 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:49:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:43 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:41.490+0000 7f4f92e04740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:43 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:41.490+0000 7f4f92e04740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:43 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:41.490+0000 7f4f92e04740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:43 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:41.491+0000 7f4f92e04740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 02:49:43 np0005534516 charming_bassi[84902]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1ccbbde0-8faf-460a-800e-c84f00ed17db
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"} v 0) v1
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3207762473' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"}]: dispatch
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3207762473' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"}]': finished
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:49:44 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:49:44 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3207762473' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"}]: dispatch
Nov 25 02:49:44 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3207762473' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"}]': finished
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:44 np0005534516 lvm[85916]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 02:49:44 np0005534516 lvm[85916]: VG ceph_vg1 finished
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 02:49:44 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 02:49:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 02:49:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065465174' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 02:49:45 np0005534516 charming_bassi[84902]: stderr: got monmap epoch 1
Nov 25 02:49:45 np0005534516 charming_bassi[84902]: --> Creating keyring file for osd.1
Nov 25 02:49:45 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 02:49:45 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 02:49:45 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1ccbbde0-8faf-460a-800e-c84f00ed17db --setuser ceph --setgroup ceph
Nov 25 02:49:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:45.412+0000 7fb45b97e740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:45.412+0000 7fb45b97e740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:45.412+0000 7fb45b97e740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:45.412+0000 7fb45b97e740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 02:49:47 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2a15223e-f21c-42af-b702-2be1c3607399
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2a15223e-f21c-42af-b702-2be1c3607399"} v 0) v1
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2437423088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2a15223e-f21c-42af-b702-2be1c3607399"}]: dispatch
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2437423088' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2a15223e-f21c-42af-b702-2be1c3607399"}]': finished
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:49:48 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:49:48 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:49:48 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:49:48 np0005534516 lvm[86869]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 02:49:48 np0005534516 lvm[86869]: VG ceph_vg2 finished
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 02:49:48 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2437423088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2a15223e-f21c-42af-b702-2be1c3607399"}]: dispatch
Nov 25 02:49:48 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2437423088' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2a15223e-f21c-42af-b702-2be1c3607399"}]': finished
Nov 25 02:49:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 02:49:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909313932' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 02:49:49 np0005534516 charming_bassi[84902]: stderr: got monmap epoch 1
Nov 25 02:49:49 np0005534516 charming_bassi[84902]: --> Creating keyring file for osd.2
Nov 25 02:49:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:49 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 02:49:49 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 02:49:49 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2a15223e-f21c-42af-b702-2be1c3607399 --setuser ceph --setgroup ceph
Nov 25 02:49:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:49:53
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] No pools available
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:49.141+0000 7fac4720a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:49.141+0000 7fac4720a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:49.142+0000 7fac4720a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: stderr: 2025-11-25T07:49:49.142+0000 7fac4720a740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 02:49:53 np0005534516 charming_bassi[84902]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:49:53 np0005534516 systemd[1]: libpod-10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1.scope: Deactivated successfully.
Nov 25 02:49:53 np0005534516 systemd[1]: libpod-10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1.scope: Consumed 6.057s CPU time.
Nov 25 02:49:53 np0005534516 podman[87797]: 2025-11-25 07:49:53.465978166 +0000 UTC m=+0.040611378 container died 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:49:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f1503374a73bc00cba1a5ccda256fdbc4838f022830d4889e37d491bc48e04af-merged.mount: Deactivated successfully.
Nov 25 02:49:53 np0005534516 podman[87797]: 2025-11-25 07:49:53.540290328 +0000 UTC m=+0.114923510 container remove 10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:49:53 np0005534516 systemd[1]: libpod-conmon-10174c727ea7c39e540a9806bde118158620a7ff00a78f2f9acebbe5b10e8fd1.scope: Deactivated successfully.
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.162451935 +0000 UTC m=+0.044330837 container create 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:49:54 np0005534516 systemd[1]: Started libpod-conmon-65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806.scope.
Nov 25 02:49:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.13977766 +0000 UTC m=+0.021656602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.244922218 +0000 UTC m=+0.126801190 container init 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.254488173 +0000 UTC m=+0.136367065 container start 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.258522182 +0000 UTC m=+0.140401134 container attach 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:49:54 np0005534516 optimistic_maxwell[87967]: 167 167
Nov 25 02:49:54 np0005534516 systemd[1]: libpod-65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806.scope: Deactivated successfully.
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.260180605 +0000 UTC m=+0.142059527 container died 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a6e0c0297e9dd3869513020653081955a222cc4ac2eb170ff35280b5996a779c-merged.mount: Deactivated successfully.
Nov 25 02:49:54 np0005534516 podman[87951]: 2025-11-25 07:49:54.31103982 +0000 UTC m=+0.192918732 container remove 65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:49:54 np0005534516 systemd[1]: libpod-conmon-65eaf18bc2dd91c86b44c871f5442390fd802ed4109c27cf047c334e6177c806.scope: Deactivated successfully.
Nov 25 02:49:54 np0005534516 podman[87992]: 2025-11-25 07:49:54.479465137 +0000 UTC m=+0.067553198 container create e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:49:54 np0005534516 podman[87992]: 2025-11-25 07:49:54.434571574 +0000 UTC m=+0.022659685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:54 np0005534516 systemd[1]: Started libpod-conmon-e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53.scope.
Nov 25 02:49:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d655e592769b82af4439eeb3236cc864f839ea6c5e8f182c39cc6a3a37052b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d655e592769b82af4439eeb3236cc864f839ea6c5e8f182c39cc6a3a37052b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d655e592769b82af4439eeb3236cc864f839ea6c5e8f182c39cc6a3a37052b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d655e592769b82af4439eeb3236cc864f839ea6c5e8f182c39cc6a3a37052b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:54 np0005534516 podman[87992]: 2025-11-25 07:49:54.597596329 +0000 UTC m=+0.185684410 container init e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:54 np0005534516 podman[87992]: 2025-11-25 07:49:54.60356328 +0000 UTC m=+0.191651341 container start e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 02:49:54 np0005534516 podman[87992]: 2025-11-25 07:49:54.607639319 +0000 UTC m=+0.195727380 container attach e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:55 np0005534516 naughty_black[88009]: {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    "0": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "devices": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "/dev/loop3"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            ],
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_name": "ceph_lv0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_size": "21470642176",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "name": "ceph_lv0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "tags": {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_name": "ceph",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.crush_device_class": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.encrypted": "0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_id": "0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.vdo": "0"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            },
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "vg_name": "ceph_vg0"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        }
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    ],
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    "1": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "devices": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "/dev/loop4"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            ],
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_name": "ceph_lv1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_size": "21470642176",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "name": "ceph_lv1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "tags": {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_name": "ceph",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.crush_device_class": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.encrypted": "0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_id": "1",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.vdo": "0"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            },
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "vg_name": "ceph_vg1"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        }
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    ],
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    "2": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "devices": [
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "/dev/loop5"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            ],
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_name": "ceph_lv2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_size": "21470642176",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "name": "ceph_lv2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "tags": {
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.cluster_name": "ceph",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.crush_device_class": "",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.encrypted": "0",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osd_id": "2",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:                "ceph.vdo": "0"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            },
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "type": "block",
Nov 25 02:49:55 np0005534516 naughty_black[88009]:            "vg_name": "ceph_vg2"
Nov 25 02:49:55 np0005534516 naughty_black[88009]:        }
Nov 25 02:49:55 np0005534516 naughty_black[88009]:    ]
Nov 25 02:49:55 np0005534516 naughty_black[88009]: }
Nov 25 02:49:55 np0005534516 systemd[1]: libpod-e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53.scope: Deactivated successfully.
Nov 25 02:49:55 np0005534516 podman[87992]: 2025-11-25 07:49:55.476732311 +0000 UTC m=+1.064820372 container died e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8d655e592769b82af4439eeb3236cc864f839ea6c5e8f182c39cc6a3a37052b8-merged.mount: Deactivated successfully.
Nov 25 02:49:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:49:55 np0005534516 podman[87992]: 2025-11-25 07:49:55.880966888 +0000 UTC m=+1.469054979 container remove e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:49:55 np0005534516 systemd[1]: libpod-conmon-e15d92bae931ac74781108f0172129476695a7d17e1ab342cff32781d3e27d53.scope: Deactivated successfully.
Nov 25 02:49:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 25 02:49:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 02:49:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:49:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:49:55 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 25 02:49:55 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 25 02:49:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.546581061 +0000 UTC m=+0.056632460 container create 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 02:49:56 np0005534516 systemd[1]: Started libpod-conmon-9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd.scope.
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.518839825 +0000 UTC m=+0.028891324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.640398396 +0000 UTC m=+0.150449845 container init 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.648469784 +0000 UTC m=+0.158521193 container start 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 02:49:56 np0005534516 clever_goldstine[88188]: 167 167
Nov 25 02:49:56 np0005534516 systemd[1]: libpod-9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd.scope: Deactivated successfully.
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.654997692 +0000 UTC m=+0.165049141 container attach 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.655333433 +0000 UTC m=+0.165384852 container died 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:49:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-579c0a87c422718ddae2415e5477b4e2f4e19b0088a7db24f1f5951daebd0e94-merged.mount: Deactivated successfully.
Nov 25 02:49:56 np0005534516 podman[88172]: 2025-11-25 07:49:56.788578688 +0000 UTC m=+0.298630097 container remove 9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:49:56 np0005534516 systemd[1]: libpod-conmon-9c070a5581ec90db733e8aabf3adffa67ef896f6016d2920be4cce315e065bfd.scope: Deactivated successfully.
Nov 25 02:49:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.163612993 +0000 UTC m=+0.063548380 container create 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.125028841 +0000 UTC m=+0.024964258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:49:57 np0005534516 systemd[1]: Started libpod-conmon-2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41.scope.
Nov 25 02:49:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:49:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.339414727 +0000 UTC m=+0.239350104 container init 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.351046218 +0000 UTC m=+0.250981575 container start 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 02:49:57 np0005534516 ceph-mon[75015]: Deploying daemon osd.0 on compute-0
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.364264799 +0000 UTC m=+0.264200266 container attach 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 02:49:57 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test[88236]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 02:49:57 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test[88236]:                            [--no-systemd] [--no-tmpfs]
Nov 25 02:49:57 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test[88236]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 02:49:57 np0005534516 systemd[1]: libpod-2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41.scope: Deactivated successfully.
Nov 25 02:49:57 np0005534516 podman[88220]: 2025-11-25 07:49:57.988493191 +0000 UTC m=+0.888428648 container died 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 02:49:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ef6761e14772e0bde2cb14ee389344c41acf50e393099a161e3a76b7f9e22b15-merged.mount: Deactivated successfully.
Nov 25 02:49:58 np0005534516 podman[88220]: 2025-11-25 07:49:58.748988135 +0000 UTC m=+1.648923542 container remove 2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 02:49:58 np0005534516 systemd[1]: libpod-conmon-2ec578a9224a42db8069dcd1e14fc8fa8f0ae3c6bcbbfa338ec6b37694586d41.scope: Deactivated successfully.
Nov 25 02:49:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:49:59 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:59 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:59 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:49:59 np0005534516 systemd[1]: Reloading.
Nov 25 02:49:59 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:49:59 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:50:00 np0005534516 systemd[1]: Starting Ceph osd.0 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:50:00 np0005534516 podman[88393]: 2025-11-25 07:50:00.394805746 +0000 UTC m=+0.041971201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:00 np0005534516 podman[88393]: 2025-11-25 07:50:00.805494549 +0000 UTC m=+0.452660024 container create ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 02:50:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:01 np0005534516 podman[88393]: 2025-11-25 07:50:01.365482851 +0000 UTC m=+1.012648306 container init ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:01 np0005534516 podman[88393]: 2025-11-25 07:50:01.37454977 +0000 UTC m=+1.021715215 container start ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:01 np0005534516 podman[88393]: 2025-11-25 07:50:01.511622027 +0000 UTC m=+1.158787552 container attach ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:50:02 np0005534516 bash[88393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 02:50:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate[88409]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 02:50:02 np0005534516 bash[88393]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 02:50:02 np0005534516 systemd[1]: libpod-ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f.scope: Deactivated successfully.
Nov 25 02:50:02 np0005534516 systemd[1]: libpod-ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f.scope: Consumed 1.269s CPU time.
Nov 25 02:50:02 np0005534516 podman[88541]: 2025-11-25 07:50:02.692108029 +0000 UTC m=+0.043517070 container died ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:50:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2235fcb7cbb1a4026f5bd500a6e8888289f4445e7f4815ded30b5a7459e90532-merged.mount: Deactivated successfully.
Nov 25 02:50:02 np0005534516 podman[88541]: 2025-11-25 07:50:02.797241217 +0000 UTC m=+0.148650178 container remove ba7fda384c8b0c6f60e9c125a5c3d89839f6ccb0c56e433c72eff2260434075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:03 np0005534516 podman[88601]: 2025-11-25 07:50:03.118184444 +0000 UTC m=+0.074154148 container create 044841914bbe697792fa4324aa06031f7219c15112d8095bc902ff115db878ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:03 np0005534516 podman[88601]: 2025-11-25 07:50:03.07546821 +0000 UTC m=+0.031437954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b1789436df1417a1f2c27db6335147de4c553b7b6df87e36e89e75c0e8bc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b1789436df1417a1f2c27db6335147de4c553b7b6df87e36e89e75c0e8bc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b1789436df1417a1f2c27db6335147de4c553b7b6df87e36e89e75c0e8bc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b1789436df1417a1f2c27db6335147de4c553b7b6df87e36e89e75c0e8bc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b1789436df1417a1f2c27db6335147de4c553b7b6df87e36e89e75c0e8bc0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:03 np0005534516 podman[88601]: 2025-11-25 07:50:03.208474978 +0000 UTC m=+0.164444762 container init 044841914bbe697792fa4324aa06031f7219c15112d8095bc902ff115db878ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:50:03 np0005534516 podman[88601]: 2025-11-25 07:50:03.218538239 +0000 UTC m=+0.174507953 container start 044841914bbe697792fa4324aa06031f7219c15112d8095bc902ff115db878ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 02:50:03 np0005534516 bash[88601]: 044841914bbe697792fa4324aa06031f7219c15112d8095bc902ff115db878ad
Nov 25 02:50:03 np0005534516 systemd[1]: Started Ceph osd.0 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: pidfile_write: ignore empty --pid-file
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd07bf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd07bf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd07bf800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd07bf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd15f7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd15f7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd15f7800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd15f7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd15f7800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:03 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 02:50:03 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd07bf800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: load: jerasure load: lrc 
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:03 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.132266874 +0000 UTC m=+0.070468651 container create a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.091248245 +0000 UTC m=+0.029450112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:04 np0005534516 systemd[1]: Started libpod-conmon-a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483.scope.
Nov 25 02:50:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.236780881 +0000 UTC m=+0.174982679 container init a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.251931785 +0000 UTC m=+0.190133602 container start a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:04 np0005534516 festive_spence[88802]: 167 167
Nov 25 02:50:04 np0005534516 systemd[1]: libpod-a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483.scope: Deactivated successfully.
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.27433875 +0000 UTC m=+0.212540527 container attach a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.275146676 +0000 UTC m=+0.213348453 container died a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:50:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 02:50:04 np0005534516 ceph-mon[75015]: Deploying daemon osd.1 on compute-0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1678c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs mount
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs mount shared_bdev_used = 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Git sha 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DB SUMMARY
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DB Session ID:  NYVHDRU5AAGPSIK36WRU
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                     Options.env: 0x562bd1649c70
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                Options.info_log: 0x562bd08468a0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.write_buffer_manager: 0x562bd1752460
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Compression algorithms supported:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kZSTD supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kXpressCompression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kZlibCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562bd08331f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562bd08331f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562bd08331f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd08462c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd0846240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd0833090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd0846240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd0833090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd0846240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562bd0833090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ebef826-e7bd-4dbb-b43f-0d6f0a26aed3
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004394704, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004394897, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: freelist init
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: freelist _read_cfg
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs umount
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 02:50:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e92a6e1b22132de8a8c1534d4e7d6b0f3287f091f07cf1799e4a9fb264c8d1cf-merged.mount: Deactivated successfully.
Nov 25 02:50:04 np0005534516 podman[88782]: 2025-11-25 07:50:04.5912979 +0000 UTC m=+0.529499717 container remove a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:04 np0005534516 systemd[1]: libpod-conmon-a8d342888d672e64dcc2011d218c655c15854e96e06d15a0c1a9f4ff994f0483.scope: Deactivated successfully.
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bdev(0x562bd1679400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs mount
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluefs mount shared_bdev_used = 4718592
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Git sha 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DB SUMMARY
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DB Session ID:  NYVHDRU5AAGPSIK36WRV
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                     Options.env: 0x562bd17fa380
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                Options.info_log: 0x562bd083cb60
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.write_buffer_manager: 0x562bd17526e0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Compression algorithms supported:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kZSTD supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kXpressCompression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kZlibCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd08331f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd0833090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd0833090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bd083c600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562bd0833090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ebef826-e7bd-4dbb-b43f-0d6f0a26aed3
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004673527, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004750685, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057004, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ebef826-e7bd-4dbb-b43f-0d6f0a26aed3", "db_session_id": "NYVHDRU5AAGPSIK36WRV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004887732, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057004, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ebef826-e7bd-4dbb-b43f-0d6f0a26aed3", "db_session_id": "NYVHDRU5AAGPSIK36WRV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004952029, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057004, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ebef826-e7bd-4dbb-b43f-0d6f0a26aed3", "db_session_id": "NYVHDRU5AAGPSIK36WRV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057004966111, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:04 np0005534516 ceph-osd[88620]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 02:50:04 np0005534516 podman[89211]: 2025-11-25 07:50:04.996619382 +0000 UTC m=+0.083656812 container create e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:05 np0005534516 podman[89211]: 2025-11-25 07:50:04.938695853 +0000 UTC m=+0.025733313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562bd1807c00
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: rocksdb: DB pointer 0x562bd173ba00
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.5 total, 0.5 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 460.80 MB usag
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: _get_class not permitted to load lua
Nov 25 02:50:05 np0005534516 systemd[1]: Started libpod-conmon-e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef.scope.
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: _get_class not permitted to load sdk
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: _get_class not permitted to load test_remote_reads
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 load_pgs
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 load_pgs opened 0 pgs
Nov 25 02:50:05 np0005534516 ceph-osd[88620]: osd.0 0 log_to_monitors true
Nov 25 02:50:05 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0[88616]: 2025-11-25T07:50:05.133+0000 7f5b7855e740 -1 osd.0 0 log_to_monitors true
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 02:50:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:05 np0005534516 podman[89211]: 2025-11-25 07:50:05.273829924 +0000 UTC m=+0.360867384 container init e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:05 np0005534516 podman[89211]: 2025-11-25 07:50:05.27998329 +0000 UTC m=+0.367020760 container start e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:50:05 np0005534516 podman[89211]: 2025-11-25 07:50:05.35077485 +0000 UTC m=+0.437812340 container attach e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:05 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:05 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:05 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:06 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test[89236]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 02:50:06 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test[89236]:                            [--no-systemd] [--no-tmpfs]
Nov 25 02:50:06 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test[89236]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 02:50:06 np0005534516 systemd[1]: libpod-e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef.scope: Deactivated successfully.
Nov 25 02:50:06 np0005534516 podman[89211]: 2025-11-25 07:50:06.028962695 +0000 UTC m=+1.116000165 container died e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-01a1808a4424dffc307340fb2eddc682a487d92710eb7c2f8c02b9dcc782cbde-merged.mount: Deactivated successfully.
Nov 25 02:50:06 np0005534516 podman[89211]: 2025-11-25 07:50:06.10020461 +0000 UTC m=+1.187242080 container remove e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:50:06 np0005534516 systemd[1]: libpod-conmon-e4abe4636b39833621d858d368e8405013bfb772a5c3c959e9d53b444f6e66ef.scope: Deactivated successfully.
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 done with init, starting boot process
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 start_boot
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 02:50:06 np0005534516 ceph-osd[88620]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:06 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:06 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:06 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:06 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:06 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 02:50:06 np0005534516 ceph-mon[75015]: from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:06 np0005534516 systemd[1]: Reloading.
Nov 25 02:50:06 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:50:06 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:50:06 np0005534516 systemd[1]: Reloading.
Nov 25 02:50:06 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:50:06 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:50:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:07 np0005534516 systemd[1]: Starting Ceph osd.1 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:50:07 np0005534516 python3[89399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:07 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:07 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:07 np0005534516 ceph-mon[75015]: from='osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:07 np0005534516 podman[89435]: 2025-11-25 07:50:07.469894615 +0000 UTC m=+0.100403447 container create fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:07 np0005534516 podman[89435]: 2025-11-25 07:50:07.399969512 +0000 UTC m=+0.030478414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:07 np0005534516 podman[89458]: 2025-11-25 07:50:07.542667439 +0000 UTC m=+0.119914941 container create 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:50:07 np0005534516 systemd[1]: Started libpod-conmon-fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b.scope.
Nov 25 02:50:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:07 np0005534516 podman[89458]: 2025-11-25 07:50:07.495259005 +0000 UTC m=+0.072506547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4dd217c2395cd442d2a78c9cb90c56b6ace9f7ab45d95e941f5395a80d82cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4dd217c2395cd442d2a78c9cb90c56b6ace9f7ab45d95e941f5395a80d82cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4dd217c2395cd442d2a78c9cb90c56b6ace9f7ab45d95e941f5395a80d82cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:07 np0005534516 podman[89435]: 2025-11-25 07:50:07.660325055 +0000 UTC m=+0.290833887 container init fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:50:07 np0005534516 podman[89435]: 2025-11-25 07:50:07.668123455 +0000 UTC m=+0.298632267 container start fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:50:07 np0005534516 podman[89458]: 2025-11-25 07:50:07.699742674 +0000 UTC m=+0.276990196 container init 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:07 np0005534516 podman[89458]: 2025-11-25 07:50:07.711135208 +0000 UTC m=+0.288382720 container start 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:50:07 np0005534516 podman[89435]: 2025-11-25 07:50:07.754362497 +0000 UTC m=+0.384871399 container attach fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:07 np0005534516 podman[89458]: 2025-11-25 07:50:07.794403197 +0000 UTC m=+0.371650709 container attach 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:50:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 02:50:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462825123' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 02:50:08 np0005534516 clever_driscoll[89473]: 
Nov 25 02:50:08 np0005534516 clever_driscoll[89473]: {"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":122,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764056988,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T07:49:55.085946+0000","services":{}},"progress_events":{}}
Nov 25 02:50:08 np0005534516 systemd[1]: libpod-fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b.scope: Deactivated successfully.
Nov 25 02:50:08 np0005534516 podman[89435]: 2025-11-25 07:50:08.29310273 +0000 UTC m=+0.923611602 container died fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:08 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:08 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9c4dd217c2395cd442d2a78c9cb90c56b6ace9f7ab45d95e941f5395a80d82cb-merged.mount: Deactivated successfully.
Nov 25 02:50:08 np0005534516 podman[89435]: 2025-11-25 07:50:08.518060603 +0000 UTC m=+1.148569415 container remove fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b (image=quay.io/ceph/ceph:v18, name=clever_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:08 np0005534516 systemd[1]: libpod-conmon-fb2483ae41647f4a036dbcf1d882f20f67c5f6ea09314f21ccdb96e343206d9b.scope: Deactivated successfully.
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:50:08 np0005534516 bash[89458]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 02:50:08 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate[89478]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 02:50:08 np0005534516 bash[89458]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 02:50:08 np0005534516 systemd[1]: libpod-844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc.scope: Deactivated successfully.
Nov 25 02:50:08 np0005534516 systemd[1]: libpod-844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc.scope: Consumed 1.146s CPU time.
Nov 25 02:50:08 np0005534516 podman[89458]: 2025-11-25 07:50:08.859726863 +0000 UTC m=+1.436974405 container died 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 02:50:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fdeda60dc4707bfd8294bee8a961da15407817cf1c3df4eba47607d9dbc4af55-merged.mount: Deactivated successfully.
Nov 25 02:50:09 np0005534516 podman[89458]: 2025-11-25 07:50:09.083612281 +0000 UTC m=+1.660859793 container remove 844259318c994f167331cddf34a7380ef457f99bebe247b6988570526d1b9dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:09 np0005534516 podman[89683]: 2025-11-25 07:50:09.390466188 +0000 UTC m=+0.103578367 container create a9285175a4a57d3c778dedc2b1b08b0cd144e62c2693e88e7fa119b1af962f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:09 np0005534516 podman[89683]: 2025-11-25 07:50:09.31250698 +0000 UTC m=+0.025619209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:09 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:09 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ddfb792c2a7835dd5605b97f2d88be55cc5d7b9cea5cbc1c3d177c028a4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ddfb792c2a7835dd5605b97f2d88be55cc5d7b9cea5cbc1c3d177c028a4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ddfb792c2a7835dd5605b97f2d88be55cc5d7b9cea5cbc1c3d177c028a4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ddfb792c2a7835dd5605b97f2d88be55cc5d7b9cea5cbc1c3d177c028a4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ddfb792c2a7835dd5605b97f2d88be55cc5d7b9cea5cbc1c3d177c028a4f9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:09 np0005534516 podman[89683]: 2025-11-25 07:50:09.667295828 +0000 UTC m=+0.380408097 container init a9285175a4a57d3c778dedc2b1b08b0cd144e62c2693e88e7fa119b1af962f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:50:09 np0005534516 podman[89683]: 2025-11-25 07:50:09.673654071 +0000 UTC m=+0.386766280 container start a9285175a4a57d3c778dedc2b1b08b0cd144e62c2693e88e7fa119b1af962f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: pidfile_write: ignore empty --pid-file
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8ad9b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8ad9b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8ad9b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8ad9b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdf000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdf000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdf000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdf000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 02:50:09 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdf000 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 02:50:09 np0005534516 bash[89683]: a9285175a4a57d3c778dedc2b1b08b0cd144e62c2693e88e7fa119b1af962f1f
Nov 25 02:50:09 np0005534516 systemd[1]: Started Ceph osd.1 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8ad9b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:10 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 25 02:50:10 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: load: jerasure load: lrc 
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 02:50:10 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:10 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 02:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bbdfc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluefs mount
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluefs mount shared_bdev_used = 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Git sha 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: DB SUMMARY
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: DB Session ID:  MS0KOLC6CGTPOX1NC6L8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                                     Options.env: 0x561d8bc31c70
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                                Options.info_log: 0x561d8ae22800
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.write_buffer_manager: 0x561d8bd3a460
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Compression algorithms supported:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kZSTD supported: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kXpressCompression supported: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kZlibCompression supported: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae22200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11479f2d-04f7-434d-b27c-ab80bc6abebd
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057010925159, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057010925439, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: freelist init
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: freelist _read_cfg
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bluefs umount
Nov 25 02:50:10 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 02:50:10 np0005534516 podman[89866]: 2025-11-25 07:50:10.961568023 +0000 UTC m=+0.112390319 container create 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:50:10 np0005534516 podman[89866]: 2025-11-25 07:50:10.876839826 +0000 UTC m=+0.027662152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:11 np0005534516 systemd[1]: Started libpod-conmon-5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1.scope.
Nov 25 02:50:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:11 np0005534516 podman[89866]: 2025-11-25 07:50:11.14340319 +0000 UTC m=+0.294225526 container init 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:11 np0005534516 podman[89866]: 2025-11-25 07:50:11.156058857 +0000 UTC m=+0.306881153 container start 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:11 np0005534516 angry_dhawan[90075]: 167 167
Nov 25 02:50:11 np0005534516 systemd[1]: libpod-5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1.scope: Deactivated successfully.
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bdev(0x561d8bdc0400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluefs mount
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluefs mount shared_bdev_used = 4718592
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Git sha 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: DB SUMMARY
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: DB Session ID:  MS0KOLC6CGTPOX1NC6L9
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                                     Options.env: 0x561d8bde0b60
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                                Options.info_log: 0x561d8ae225c0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.write_buffer_manager: 0x561d8bd3a6e0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Compression algorithms supported:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kZSTD supported: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kXpressCompression supported: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kZlibCompression supported: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d8ae0f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 podman[89866]: 2025-11-25 07:50:11.189701099 +0000 UTC m=+0.340523485 container attach 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:11 np0005534516 podman[89866]: 2025-11-25 07:50:11.190513215 +0000 UTC m=+0.341335551 container died 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18780)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d8ae18740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561d8ae0f090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11479f2d-04f7-434d-b27c-ab80bc6abebd
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057011208763, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057011257735, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057011, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11479f2d-04f7-434d-b27c-ab80bc6abebd", "db_session_id": "MS0KOLC6CGTPOX1NC6L9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057011340205, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057011, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11479f2d-04f7-434d-b27c-ab80bc6abebd", "db_session_id": "MS0KOLC6CGTPOX1NC6L9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8795f02d0c4b5b6911a9dc0afb1d56102de45d30debf793f54d5b530787907ac-merged.mount: Deactivated successfully.
Nov 25 02:50:11 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057011423467, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057011, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11479f2d-04f7-434d-b27c-ab80bc6abebd", "db_session_id": "MS0KOLC6CGTPOX1NC6L9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:11 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057011427778, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 02:50:11 np0005534516 podman[89866]: 2025-11-25 07:50:11.499650672 +0000 UTC m=+0.650472968 container remove 5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dhawan, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:50:11 np0005534516 systemd[1]: libpod-conmon-5131a73db7ad6882659e02b46552e1c32788c269b44b22065e0894c2cbb8afe1.scope: Deactivated successfully.
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561d8af7c000
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: DB pointer 0x561d8bd23a00
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.4 total, 0.4 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.4 total, 0.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.4 total, 0.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.4 total, 0.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 460.80 MB usag
Nov 25 02:50:11 np0005534516 ceph-mon[75015]: Deploying daemon osd.2 on compute-0
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: _get_class not permitted to load lua
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: _get_class not permitted to load sdk
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: _get_class not permitted to load test_remote_reads
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 load_pgs
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 load_pgs opened 0 pgs
Nov 25 02:50:11 np0005534516 ceph-osd[89702]: osd.1 0 log_to_monitors true
Nov 25 02:50:11 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1[89698]: 2025-11-25T07:50:11.619+0000 7f780534e740 -1 osd.1 0 log_to_monitors true
Nov 25 02:50:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 25 02:50:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 02:50:11 np0005534516 podman[90322]: 2025-11-25 07:50:11.80039791 +0000 UTC m=+0.075409820 container create e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:11 np0005534516 podman[90322]: 2025-11-25 07:50:11.748775092 +0000 UTC m=+0.023787042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:11 np0005534516 systemd[1]: Started libpod-conmon-e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5.scope.
Nov 25 02:50:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:11 np0005534516 podman[90322]: 2025-11-25 07:50:11.929462536 +0000 UTC m=+0.204474496 container init e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:11 np0005534516 podman[90322]: 2025-11-25 07:50:11.938749094 +0000 UTC m=+0.213761014 container start e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:50:11 np0005534516 podman[90322]: 2025-11-25 07:50:11.966251432 +0000 UTC m=+0.241263352 container attach e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:50:12 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:12 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:12 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test[90339]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 02:50:12 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test[90339]:                            [--no-systemd] [--no-tmpfs]
Nov 25 02:50:12 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test[90339]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 02:50:12 np0005534516 systemd[1]: libpod-e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5.scope: Deactivated successfully.
Nov 25 02:50:12 np0005534516 podman[90322]: 2025-11-25 07:50:12.577161926 +0000 UTC m=+0.852173876 container died e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:50:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 02:50:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 02:50:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e65e6310591deb2c8aeb5f4ab9b4c17edf6417bb358869485a2d79ef8d9a219d-merged.mount: Deactivated successfully.
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:12 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:12 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:12 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:12 np0005534516 podman[90322]: 2025-11-25 07:50:12.870751138 +0000 UTC m=+1.145763058 container remove e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:12 np0005534516 systemd[1]: libpod-conmon-e4edb3f91795a8486106e6d1d982abdd5f41dd2a14bc2871948f04531bf152c5.scope: Deactivated successfully.
Nov 25 02:50:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.779 iops: 1991.383 elapsed_sec: 1.506
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [WRN] : OSD bench result of 1991.382663 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 0 waiting for initial osdmap
Nov 25 02:50:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0[88616]: 2025-11-25T07:50:13.409+0000 7f5b744de640 -1 osd.0 0 waiting for initial osdmap
Nov 25 02:50:13 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2098995573; not ready for session (expect reconnect)
Nov 25 02:50:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:13 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 25 02:50:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-0[88616]: 2025-11-25T07:50:13.617+0000 7f5b6fb06640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:13 np0005534516 ceph-osd[88620]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 25 02:50:13 np0005534516 systemd[1]: Reloading.
Nov 25 02:50:13 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:50:13 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:50:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 02:50:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:50:14 np0005534516 systemd[1]: Reloading.
Nov 25 02:50:14 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:50:14 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 done with init, starting boot process
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 start_boot
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 02:50:14 np0005534516 ceph-osd[89702]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573] boot
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:14 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:14 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:14 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:14 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:14 np0005534516 systemd[1]: Starting Ceph osd.2 for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:50:14 np0005534516 ceph-osd[88620]: osd.0 9 tick checking mon for new map
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 02:50:14 np0005534516 ceph-mon[75015]: from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:14 np0005534516 ceph-osd[88620]: osd.0 10 state: booting -> active
Nov 25 02:50:14 np0005534516 podman[90499]: 2025-11-25 07:50:14.639788822 +0000 UTC m=+0.114845169 container create 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:14 np0005534516 podman[90499]: 2025-11-25 07:50:14.545060541 +0000 UTC m=+0.020116908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:14 np0005534516 podman[90499]: 2025-11-25 07:50:14.812721768 +0000 UTC m=+0.287778115 container init 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:14 np0005534516 podman[90499]: 2025-11-25 07:50:14.819205199 +0000 UTC m=+0.294261546 container start 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:50:14 np0005534516 podman[90499]: 2025-11-25 07:50:14.835255435 +0000 UTC m=+0.310311772 container attach 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] creating mgr pool
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: OSD bench result of 1991.382663 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: from='osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: osd.0 [v2:192.168.122.100:6802/2098995573,v1:192.168.122.100:6803/2098995573] boot
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:50:15 np0005534516 bash[90499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 02:50:15 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate[90515]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 02:50:15 np0005534516 bash[90499]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 02:50:15 np0005534516 systemd[1]: libpod-4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55.scope: Deactivated successfully.
Nov 25 02:50:15 np0005534516 systemd[1]: libpod-4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55.scope: Consumed 1.110s CPU time.
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:15 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 25 02:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 02:50:15 np0005534516 ceph-osd[88620]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 02:50:15 np0005534516 ceph-osd[88620]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 02:50:15 np0005534516 ceph-osd[88620]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 02:50:15 np0005534516 podman[90634]: 2025-11-25 07:50:15.97102505 +0000 UTC m=+0.029264004 container died 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 02:50:16 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:16 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0854d4dee48bc53f0b17bb63a63c06d5c423eee89367cdd2f74eee2794d1a0d6-merged.mount: Deactivated successfully.
Nov 25 02:50:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 02:50:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 25 02:50:17 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 25 02:50:17 np0005534516 podman[90634]: 2025-11-25 07:50:17.489675548 +0000 UTC m=+1.547914502 container remove 4bbe3135d22ad1f674e5b9dab5a80c9e84c6cfceae1f5a12d6f4059419241b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:17 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:17 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 02:50:17 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 02:50:17 np0005534516 podman[90692]: 2025-11-25 07:50:17.801352126 +0000 UTC m=+0.091341613 container create aa9541b139f73db2b7c3920fc94f2d9f34756e48c0f6dad46798d294fa11059b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:17 np0005534516 podman[90692]: 2025-11-25 07:50:17.738737046 +0000 UTC m=+0.028726573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55f1c1ed25ef079d4850ef1f82076e317a4c3c9b3c290051b48ae073a7e451a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55f1c1ed25ef079d4850ef1f82076e317a4c3c9b3c290051b48ae073a7e451a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55f1c1ed25ef079d4850ef1f82076e317a4c3c9b3c290051b48ae073a7e451a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55f1c1ed25ef079d4850ef1f82076e317a4c3c9b3c290051b48ae073a7e451a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55f1c1ed25ef079d4850ef1f82076e317a4c3c9b3c290051b48ae073a7e451a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:18 np0005534516 podman[90692]: 2025-11-25 07:50:18.060250125 +0000 UTC m=+0.350239682 container init aa9541b139f73db2b7c3920fc94f2d9f34756e48c0f6dad46798d294fa11059b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:50:18 np0005534516 podman[90692]: 2025-11-25 07:50:18.075525145 +0000 UTC m=+0.365514662 container start aa9541b139f73db2b7c3920fc94f2d9f34756e48c0f6dad46798d294fa11059b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: pidfile_write: ignore empty --pid-file
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a125d9f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a125d9f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a125d9f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a125d9f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126be1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126be1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126be1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126be1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126be1800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 02:50:18 np0005534516 bash[90692]: aa9541b139f73db2b7c3920fc94f2d9f34756e48c0f6dad46798d294fa11059b
Nov 25 02:50:18 np0005534516 systemd[1]: Started Ceph osd.2 for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:18 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:18 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a125d9f800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: load: jerasure load: lrc 
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:18 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 02:50:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c62c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs mount
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs mount shared_bdev_used = 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Git sha 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DB SUMMARY
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DB Session ID:  JKC3XCBUPPKQZZET88PC
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                     Options.env: 0x55a126c33c70
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                Options.info_log: 0x55a125e268a0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.write_buffer_manager: 0x55a126d3c460
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Compression algorithms supported:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kZSTD supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kXpressCompression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kZlibCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e13090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e13090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e13090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 361d3c97-dd19-4c03-9753-1f5b4d245f16
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057019266335, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057019266625, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: freelist init
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: freelist _read_cfg
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs umount
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 02:50:19 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:19 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bdev(0x55a126c63400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs mount
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 02:50:19 np0005534516 podman[91068]: 2025-11-25 07:50:19.410254923 +0000 UTC m=+0.048118906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluefs mount shared_bdev_used = 4718592
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: RocksDB version: 7.9.2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Git sha 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DB SUMMARY
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DB Session ID:  JKC3XCBUPPKQZZET88PD
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: CURRENT file:  CURRENT
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.error_if_exists: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.create_if_missing: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                     Options.env: 0x55a126de4b60
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                Options.info_log: 0x55a125e26620
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.statistics: (nil)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.use_fsync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.db_log_dir: 
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.write_buffer_manager: 0x55a126d3c460
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.unordered_write: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.row_cache: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                              Options.wal_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.two_write_queues: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.wal_compression: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.atomic_flush: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_background_jobs: 4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_background_compactions: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_subcompactions: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.max_open_files: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Compression algorithms supported:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kZSTD supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kXpressCompression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kBZip2Compression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kZSTDNotFinalCompression supported: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kLZ4Compression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kZlibCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kLZ4HCCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   kSnappyCompression supported: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a125e13090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e13090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:           Options.merge_operator: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a125e26380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a125e13090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.compression: LZ4
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.num_levels: 7
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.bloom_locality: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                               Options.ttl: 2592000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                       Options.enable_blob_files: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                           Options.min_blob_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 361d3c97-dd19-4c03-9753-1f5b4d245f16
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057019562866, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 02:50:19 np0005534516 podman[91068]: 2025-11-25 07:50:19.785176554 +0000 UTC m=+0.423040487 container create 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:50:19 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057019919505, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057019, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "361d3c97-dd19-4c03-9753-1f5b4d245f16", "db_session_id": "JKC3XCBUPPKQZZET88PD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:50:20 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:20 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:20 np0005534516 systemd[1]: Started libpod-conmon-1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d.scope.
Nov 25 02:50:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:20 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057020710036, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057019, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "361d3c97-dd19-4c03-9753-1f5b4d245f16", "db_session_id": "JKC3XCBUPPKQZZET88PD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:20 np0005534516 podman[91068]: 2025-11-25 07:50:20.95456688 +0000 UTC m=+1.592430873 container init 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 02:50:20 np0005534516 podman[91068]: 2025-11-25 07:50:20.966715907 +0000 UTC m=+1.604579840 container start 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:20 np0005534516 jovial_neumann[91268]: 167 167
Nov 25 02:50:20 np0005534516 systemd[1]: libpod-1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d.scope: Deactivated successfully.
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057021009701, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057020, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "361d3c97-dd19-4c03-9753-1f5b4d245f16", "db_session_id": "JKC3XCBUPPKQZZET88PD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057021052990, "job": 1, "event": "recovery_finished"}
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 02:50:21 np0005534516 podman[91068]: 2025-11-25 07:50:21.054016337 +0000 UTC m=+1.691880290 container attach 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 02:50:21 np0005534516 podman[91068]: 2025-11-25 07:50:21.054577758 +0000 UTC m=+1.692441701 container died 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 02:50:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:21 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:21 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-644c6fd8ade605a0f8905edd0c21e63a474dac99caf09949ee54044a214c6cae-merged.mount: Deactivated successfully.
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a125f80000
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: DB pointer 0x55a126d25a00
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2.4 total, 2.4 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 460.80 MB usag
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: _get_class not permitted to load lua
Nov 25 02:50:21 np0005534516 podman[91068]: 2025-11-25 07:50:21.919168427 +0000 UTC m=+2.557032360 container remove 1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: _get_class not permitted to load sdk
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: _get_class not permitted to load test_remote_reads
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 load_pgs
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 load_pgs opened 0 pgs
Nov 25 02:50:21 np0005534516 ceph-osd[90711]: osd.2 0 log_to_monitors true
Nov 25 02:50:21 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2[90707]: 2025-11-25T07:50:21.925+0000 7faa7f1a6740 -1 osd.2 0 log_to_monitors true
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 25 02:50:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 02:50:21 np0005534516 systemd[1]: libpod-conmon-1351a5d07d3019f4989802322f7b339ca418decae7159ac01deba3056b85c82d.scope: Deactivated successfully.
Nov 25 02:50:22 np0005534516 podman[91326]: 2025-11-25 07:50:22.163144402 +0000 UTC m=+0.115144325 container create 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 02:50:22 np0005534516 podman[91326]: 2025-11-25 07:50:22.083079579 +0000 UTC m=+0.035079522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:22 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:22 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:22 np0005534516 systemd[1]: Started libpod-conmon-4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2.scope.
Nov 25 02:50:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913f020d587fb45cc724b02ceef17e18af72c7b26fc1fdff5f1cd45e6e56d7b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913f020d587fb45cc724b02ceef17e18af72c7b26fc1fdff5f1cd45e6e56d7b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913f020d587fb45cc724b02ceef17e18af72c7b26fc1fdff5f1cd45e6e56d7b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913f020d587fb45cc724b02ceef17e18af72c7b26fc1fdff5f1cd45e6e56d7b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 02:50:22 np0005534516 podman[91326]: 2025-11-25 07:50:22.680124483 +0000 UTC m=+0.632124526 container init 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 02:50:22 np0005534516 podman[91326]: 2025-11-25 07:50:22.693891792 +0000 UTC m=+0.645891715 container start 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 02:50:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 02:50:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 25 02:50:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 02:50:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:23 np0005534516 podman[91326]: 2025-11-25 07:50:23.253482427 +0000 UTC m=+1.205482440 container attach 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:23 np0005534516 eager_tharp[91343]: {
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_id": 1,
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "type": "bluestore"
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    },
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_id": 2,
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "type": "bluestore"
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    },
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_id": 0,
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:        "type": "bluestore"
Nov 25 02:50:23 np0005534516 eager_tharp[91343]:    }
Nov 25 02:50:23 np0005534516 eager_tharp[91343]: }
Nov 25 02:50:23 np0005534516 systemd[1]: libpod-4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2.scope: Deactivated successfully.
Nov 25 02:50:23 np0005534516 podman[91326]: 2025-11-25 07:50:23.768809094 +0000 UTC m=+1.720809067 container died 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 02:50:23 np0005534516 systemd[1]: libpod-4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2.scope: Consumed 1.078s CPU time.
Nov 25 02:50:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 done with init, starting boot process
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 start_boot
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 02:50:24 np0005534516 ceph-osd[90711]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-913f020d587fb45cc724b02ceef17e18af72c7b26fc1fdff5f1cd45e6e56d7b9-merged.mount: Deactivated successfully.
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:24 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 02:50:24 np0005534516 ceph-mon[75015]: from='osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 02:50:24 np0005534516 podman[91326]: 2025-11-25 07:50:24.95640449 +0000 UTC m=+2.908404413 container remove 4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:25 np0005534516 systemd[1]: libpod-conmon-4fc8ba7c65e47ad8bc3a8c87af25520f0f893b149f93e197c204e47adc3d65e2.scope: Deactivated successfully.
Nov 25 02:50:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:25 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:25 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:25 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:25 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:26 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:26 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:26 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:26 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:27 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:27 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:27 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:27 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:27 np0005534516 podman[91606]: 2025-11-25 07:50:27.482545363 +0000 UTC m=+0.561795051 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:50:27 np0005534516 podman[91606]: 2025-11-25 07:50:27.606666079 +0000 UTC m=+0.685915727 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:28 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:28 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:28 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:28 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:29 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:29 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:29 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:29 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:30 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:30 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:30 np0005534516 podman[91998]: 2025-11-25 07:50:30.121953181 +0000 UTC m=+0.022306603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:30 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:30 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:30 np0005534516 podman[91998]: 2025-11-25 07:50:30.361609209 +0000 UTC m=+0.261962601 container create d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:50:30 np0005534516 systemd[1]: Started libpod-conmon-d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f.scope.
Nov 25 02:50:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:31 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:31 np0005534516 podman[91998]: 2025-11-25 07:50:31.223635835 +0000 UTC m=+1.123989247 container init d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:50:31 np0005534516 podman[91998]: 2025-11-25 07:50:31.235191289 +0000 UTC m=+1.135544681 container start d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:50:31 np0005534516 boring_austin[92014]: 167 167
Nov 25 02:50:31 np0005534516 systemd[1]: libpod-d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f.scope: Deactivated successfully.
Nov 25 02:50:31 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:31 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:31 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:31 np0005534516 podman[91998]: 2025-11-25 07:50:31.551092134 +0000 UTC m=+1.451445526 container attach d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:31 np0005534516 podman[91998]: 2025-11-25 07:50:31.552024832 +0000 UTC m=+1.452378224 container died d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:50:32 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:32 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:32 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:32 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-df7339f27fe0106b1190fa14d7c266c5624dfe0003530b3ced09da49d76dcdbf-merged.mount: Deactivated successfully.
Nov 25 02:50:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:33 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:33 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:33 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:33 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:33 np0005534516 podman[91998]: 2025-11-25 07:50:33.874751411 +0000 UTC m=+3.775104843 container remove d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:33 np0005534516 systemd[1]: libpod-conmon-d2b45fa7dfc10f0ec6c599949b0e29de062f2c10502318e461108d16e20df43f.scope: Deactivated successfully.
Nov 25 02:50:34 np0005534516 podman[92041]: 2025-11-25 07:50:34.020408503 +0000 UTC m=+0.026186383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:34 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:34 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:34 np0005534516 podman[92041]: 2025-11-25 07:50:34.18153152 +0000 UTC m=+0.187309389 container create 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:34 np0005534516 systemd[1]: Started libpod-conmon-8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6.scope.
Nov 25 02:50:34 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:34 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39585cffdc7446f0bed47e813030538ac24d3a693e5c384a7b231b2557ef40c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39585cffdc7446f0bed47e813030538ac24d3a693e5c384a7b231b2557ef40c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39585cffdc7446f0bed47e813030538ac24d3a693e5c384a7b231b2557ef40c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39585cffdc7446f0bed47e813030538ac24d3a693e5c384a7b231b2557ef40c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:34 np0005534516 podman[92041]: 2025-11-25 07:50:34.395265702 +0000 UTC m=+0.401043551 container init 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 02:50:34 np0005534516 podman[92041]: 2025-11-25 07:50:34.401364746 +0000 UTC m=+0.407142575 container start 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 25 02:50:34 np0005534516 podman[92041]: 2025-11-25 07:50:34.457955874 +0000 UTC m=+0.463733703 container attach 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 02:50:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:35 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:35 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:35 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:35 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]: [
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:    {
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "available": false,
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "ceph_device": false,
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "lsm_data": {},
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "lvs": [],
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "path": "/dev/sr0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "rejected_reasons": [
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "Has a FileSystem",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "Insufficient space (<5GB)"
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        ],
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        "sys_api": {
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "actuators": null,
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "device_nodes": "sr0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "devname": "sr0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "human_readable_size": "482.00 KB",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "id_bus": "ata",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "model": "QEMU DVD-ROM",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "nr_requests": "2",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "parent": "/dev/sr0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "partitions": {},
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "path": "/dev/sr0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "removable": "1",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "rev": "2.5+",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "ro": "0",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "rotational": "1",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "sas_address": "",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "sas_device_handle": "",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "scheduler_mode": "mq-deadline",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "sectors": 0,
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "sectorsize": "2048",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "size": 493568.0,
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "support_discard": "2048",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "type": "disk",
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:            "vendor": "QEMU"
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:        }
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]:    }
Nov 25 02:50:35 np0005534516 pedantic_ganguly[92057]: ]
Nov 25 02:50:35 np0005534516 systemd[1]: libpod-8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6.scope: Deactivated successfully.
Nov 25 02:50:35 np0005534516 systemd[1]: libpod-8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6.scope: Consumed 1.453s CPU time.
Nov 25 02:50:35 np0005534516 podman[92041]: 2025-11-25 07:50:35.850151608 +0000 UTC m=+1.855929437 container died 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c39585cffdc7446f0bed47e813030538ac24d3a693e5c384a7b231b2557ef40c-merged.mount: Deactivated successfully.
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:36 np0005534516 podman[92041]: 2025-11-25 07:50:36.469654287 +0000 UTC m=+2.475432146 container remove 8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:50:36 np0005534516 systemd[1]: libpod-conmon-8951207213f249db2160f291d2dbe2d307f6ec1f19d1ef64539b305bcfd5d4f6.scope: Deactivated successfully.
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 75fd3dd7-2b23-4972-a57a-270958560822 does not exist
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5ccb7a27-2c76-4565-82bb-ebecc60864ea does not exist
Nov 25 02:50:36 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f9f2a8f4-ac74-48ba-a01f-f6558b496d96 does not exist
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:37 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:37 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:37 np0005534516 podman[93839]: 2025-11-25 07:50:37.621333065 +0000 UTC m=+0.028069390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:50:37 np0005534516 podman[93839]: 2025-11-25 07:50:37.856215936 +0000 UTC m=+0.262952151 container create e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 02:50:37 np0005534516 systemd[1]: Started libpod-conmon-e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382.scope.
Nov 25 02:50:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:38 np0005534516 podman[93839]: 2025-11-25 07:50:38.130243052 +0000 UTC m=+0.536979327 container init e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:50:38 np0005534516 podman[93839]: 2025-11-25 07:50:38.138263154 +0000 UTC m=+0.544999379 container start e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:50:38 np0005534516 systemd[1]: libpod-e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382.scope: Deactivated successfully.
Nov 25 02:50:38 np0005534516 funny_shannon[93855]: 167 167
Nov 25 02:50:38 np0005534516 conmon[93855]: conmon e6ff0f976e07d2734c7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382.scope/container/memory.events
Nov 25 02:50:38 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:38 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:38 np0005534516 podman[93839]: 2025-11-25 07:50:38.235594677 +0000 UTC m=+0.642330932 container attach e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:50:38 np0005534516 podman[93839]: 2025-11-25 07:50:38.235929404 +0000 UTC m=+0.642665619 container died e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:38 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:38 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1627053f8cf68c308aa432c7764cc09da86b03fd1ea94d4930a7935e1d269d17-merged.mount: Deactivated successfully.
Nov 25 02:50:38 np0005534516 python3[93897]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:39 np0005534516 podman[93839]: 2025-11-25 07:50:39.14087145 +0000 UTC m=+1.547607705 container remove e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:50:39 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:39 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.656 iops: 679.928 elapsed_sec: 4.412
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [WRN] : OSD bench result of 679.927540 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 0 waiting for initial osdmap
Nov 25 02:50:39 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1[89698]: 2025-11-25T07:50:39.234+0000 7f7801ae5640 -1 osd.1 0 waiting for initial osdmap
Nov 25 02:50:39 np0005534516 podman[93899]: 2025-11-25 07:50:39.163446897 +0000 UTC m=+0.207959436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:39 np0005534516 podman[93899]: 2025-11-25 07:50:39.302932625 +0000 UTC m=+0.347445144 container create 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 02:50:39 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:39 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:39 np0005534516 systemd[1]: Started libpod-conmon-67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7.scope.
Nov 25 02:50:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1471bcf0c37acbaaa2c754fd8382369aada87aafc56209774bb7017bef83cdb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1471bcf0c37acbaaa2c754fd8382369aada87aafc56209774bb7017bef83cdb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1471bcf0c37acbaaa2c754fd8382369aada87aafc56209774bb7017bef83cdb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 podman[93920]: 2025-11-25 07:50:39.575399899 +0000 UTC m=+0.271595287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 check_osdmap_features require_osd_release unknown -> reef
Nov 25 02:50:39 np0005534516 podman[93920]: 2025-11-25 07:50:39.789733144 +0000 UTC m=+0.485928472 container create d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 set_numa_affinity not setting numa affinity
Nov 25 02:50:39 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-1[89698]: 2025-11-25T07:50:39.851+0000 7f77fc8f6640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:39 np0005534516 ceph-osd[89702]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 25 02:50:39 np0005534516 systemd[1]: Started libpod-conmon-d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8.scope.
Nov 25 02:50:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:39 np0005534516 podman[93899]: 2025-11-25 07:50:39.957351213 +0000 UTC m=+1.001863712 container init 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 02:50:39 np0005534516 podman[93899]: 2025-11-25 07:50:39.967947848 +0000 UTC m=+1.012460337 container start 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:40 np0005534516 podman[93920]: 2025-11-25 07:50:40.093980813 +0000 UTC m=+0.790176161 container init d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:50:40 np0005534516 podman[93920]: 2025-11-25 07:50:40.101337341 +0000 UTC m=+0.797532629 container start d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 02:50:40 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:40 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:40 np0005534516 ceph-osd[89702]: osd.1 14 tick checking mon for new map
Nov 25 02:50:40 np0005534516 podman[93899]: 2025-11-25 07:50:40.26062449 +0000 UTC m=+1.305136969 container attach 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:40 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1972745281; not ready for session (expect reconnect)
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:40 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: OSD bench result of 679.927540 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:40 np0005534516 podman[93920]: 2025-11-25 07:50:40.382383389 +0000 UTC m=+1.078578717 container attach d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:40 np0005534516 systemd[1]: libpod-conmon-e6ff0f976e07d2734c7fed6d2309873819f8d73ec4be159e9444626d5a03d382.scope: Deactivated successfully.
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281] boot
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864140322' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:40 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:40 np0005534516 loving_chaum[93936]: 
Nov 25 02:50:40 np0005534516 loving_chaum[93936]: {"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":154,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1764057040,"num_in_osds":3,"osd_in_since":1764056988,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":27598848,"bytes_avail":21443043328,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T07:49:55.085946+0000","services":{}},"progress_events":{}}
Nov 25 02:50:40 np0005534516 systemd[1]: libpod-67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7.scope: Deactivated successfully.
Nov 25 02:50:40 np0005534516 podman[93899]: 2025-11-25 07:50:40.627211122 +0000 UTC m=+1.671723601 container died 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 02:50:40 np0005534516 ceph-osd[89702]: osd.1 15 state: booting -> active
Nov 25 02:50:40 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d1471bcf0c37acbaaa2c754fd8382369aada87aafc56209774bb7017bef83cdb-merged.mount: Deactivated successfully.
Nov 25 02:50:40 np0005534516 podman[93899]: 2025-11-25 07:50:40.750476771 +0000 UTC m=+1.794989250 container remove 67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7 (image=quay.io/ceph/ceph:v18, name=loving_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 02:50:40 np0005534516 systemd[1]: libpod-conmon-67fee18e0bf2ca31fc0598ab11c38d17840608f9a200faa63835ac9356b1c5b7.scope: Deactivated successfully.
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:41 np0005534516 serene_agnesi[93943]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:50:41 np0005534516 serene_agnesi[93943]: --> relative data size: 1.0
Nov 25 02:50:41 np0005534516 serene_agnesi[93943]: --> All data devices are unavailable
Nov 25 02:50:41 np0005534516 systemd[1]: libpod-d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8.scope: Deactivated successfully.
Nov 25 02:50:41 np0005534516 systemd[1]: libpod-d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8.scope: Consumed 1.087s CPU time.
Nov 25 02:50:41 np0005534516 conmon[93943]: conmon d9fac40960a6b70e75fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8.scope/container/memory.events
Nov 25 02:50:41 np0005534516 python3[94025]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:41 np0005534516 podman[94031]: 2025-11-25 07:50:41.298618004 +0000 UTC m=+0.028490649 container died d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-987d55a18091b1d695c9c993514edc1e187628a6b17dd1924b9b8f64214bf1f8-merged.mount: Deactivated successfully.
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: osd.1 [v2:192.168.122.100:6806/1972745281,v1:192.168.122.100:6807/1972745281] boot
Nov 25 02:50:41 np0005534516 podman[94042]: 2025-11-25 07:50:41.366820966 +0000 UTC m=+0.057872554 container create f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:41 np0005534516 podman[94031]: 2025-11-25 07:50:41.399835476 +0000 UTC m=+0.129708031 container remove d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:50:41 np0005534516 systemd[1]: libpod-conmon-d9fac40960a6b70e75fd4c20e2b329585c13b1a369443a1651370258cfd535d8.scope: Deactivated successfully.
Nov 25 02:50:41 np0005534516 podman[94042]: 2025-11-25 07:50:41.341417261 +0000 UTC m=+0.032468849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:41 np0005534516 systemd[1]: Started libpod-conmon-f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47.scope.
Nov 25 02:50:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13123df8deb889ff18f0d7f2e45563302773b50082898fc982af4fd8da32ef3e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13123df8deb889ff18f0d7f2e45563302773b50082898fc982af4fd8da32ef3e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:41 np0005534516 podman[94042]: 2025-11-25 07:50:41.526564225 +0000 UTC m=+0.217615883 container init f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 02:50:41 np0005534516 podman[94042]: 2025-11-25 07:50:41.539387615 +0000 UTC m=+0.230439223 container start f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 02:50:41 np0005534516 podman[94042]: 2025-11-25 07:50:41.597619185 +0000 UTC m=+0.288670853 container attach f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:41 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 14.886 iops: 3810.828 elapsed_sec: 0.787
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: log_channel(cluster) log [WRN] : OSD bench result of 3810.827621 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 0 waiting for initial osdmap
Nov 25 02:50:41 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2[90707]: 2025-11-25T07:50:41.849+0000 7faa7b126640 -1 osd.2 0 waiting for initial osdmap
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 set_numa_affinity not setting numa affinity
Nov 25 02:50:41 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-osd-2[90707]: 2025-11-25T07:50:41.896+0000 7faa7674e640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 02:50:41 np0005534516 ceph-osd[90711]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 02:50:41 np0005534516 ceph-mgr[75313]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 25 02:50:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2708298826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mgr[75313]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/507468267; not ready for session (expect reconnect)
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mgr[75313]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.195995206 +0000 UTC m=+0.092324022 container create a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.131396077 +0000 UTC m=+0.027724923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:42 np0005534516 systemd[1]: Started libpod-conmon-a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93.scope.
Nov 25 02:50:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.283056611 +0000 UTC m=+0.179385427 container init a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.294768958 +0000 UTC m=+0.191097774 container start a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:50:42 np0005534516 ecstatic_burnell[94256]: 167 167
Nov 25 02:50:42 np0005534516 systemd[1]: libpod-a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93.scope: Deactivated successfully.
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.306162959 +0000 UTC m=+0.202491815 container attach a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.307056478 +0000 UTC m=+0.203385334 container died a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:50:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ab025c15640dafcdcdda4dcd5020855ba88affb6b6a1ce69e0aacfeb78d2bf58-merged.mount: Deactivated successfully.
Nov 25 02:50:42 np0005534516 podman[94240]: 2025-11-25 07:50:42.360276497 +0000 UTC m=+0.256605353 container remove a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 02:50:42 np0005534516 systemd[1]: libpod-conmon-a7af9c6f16e91f008051cabfcc43853c6ecd0477e032631fd6a4c8dd7e695a93.scope: Deactivated successfully.
Nov 25 02:50:42 np0005534516 podman[94281]: 2025-11-25 07:50:42.588277199 +0000 UTC m=+0.051813922 container create 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 02:50:42 np0005534516 podman[94281]: 2025-11-25 07:50:42.568849995 +0000 UTC m=+0.032386738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2708298826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 25 02:50:42 np0005534516 sweet_darwin[94060]: pool 'vms' created
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267] boot
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: OSD bench result of 3810.827621 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 02:50:42 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2708298826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:42 np0005534516 ceph-osd[90711]: osd.2 17 state: booting -> active
Nov 25 02:50:42 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:42 np0005534516 systemd[1]: Started libpod-conmon-5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3.scope.
Nov 25 02:50:42 np0005534516 systemd[1]: libpod-f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47.scope: Deactivated successfully.
Nov 25 02:50:42 np0005534516 podman[94042]: 2025-11-25 07:50:42.710389015 +0000 UTC m=+1.401440663 container died f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:50:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd106175a18fede6e9cb471923e387144c6c7fc88c5f8b9f10f6ecd426d31c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd106175a18fede6e9cb471923e387144c6c7fc88c5f8b9f10f6ecd426d31c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd106175a18fede6e9cb471923e387144c6c7fc88c5f8b9f10f6ecd426d31c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd106175a18fede6e9cb471923e387144c6c7fc88c5f8b9f10f6ecd426d31c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-13123df8deb889ff18f0d7f2e45563302773b50082898fc982af4fd8da32ef3e-merged.mount: Deactivated successfully.
Nov 25 02:50:42 np0005534516 podman[94042]: 2025-11-25 07:50:42.848140617 +0000 UTC m=+1.539192185 container remove f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47 (image=quay.io/ceph/ceph:v18, name=sweet_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:50:42 np0005534516 podman[94281]: 2025-11-25 07:50:42.911657034 +0000 UTC m=+0.375193797 container init 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:42 np0005534516 podman[94281]: 2025-11-25 07:50:42.922084146 +0000 UTC m=+0.385620889 container start 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:50:42 np0005534516 podman[94281]: 2025-11-25 07:50:42.948625345 +0000 UTC m=+0.412162158 container attach 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 02:50:43 np0005534516 systemd[1]: libpod-conmon-f14d162924d58886a861fa57130aab65fc4cb6f4d531512386e734d4bd17ed47.scope: Deactivated successfully.
Nov 25 02:50:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 25 02:50:43 np0005534516 python3[94341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:43 np0005534516 podman[94342]: 2025-11-25 07:50:43.305370186 +0000 UTC m=+0.115532722 container create 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:43 np0005534516 podman[94342]: 2025-11-25 07:50:43.229710573 +0000 UTC m=+0.039873199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:43 np0005534516 systemd[1]: Started libpod-conmon-539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1.scope.
Nov 25 02:50:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a30acd122bf66620f711243069cb67a25169d64f37119f9777797d68d09db04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a30acd122bf66620f711243069cb67a25169d64f37119f9777797d68d09db04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:43 np0005534516 podman[94342]: 2025-11-25 07:50:43.461177865 +0000 UTC m=+0.271340431 container init 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:50:43 np0005534516 podman[94342]: 2025-11-25 07:50:43.470155067 +0000 UTC m=+0.280317613 container start 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 02:50:43 np0005534516 podman[94342]: 2025-11-25 07:50:43.477332282 +0000 UTC m=+0.287494838 container attach 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]: {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    "0": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "devices": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "/dev/loop3"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            ],
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_name": "ceph_lv0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_size": "21470642176",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "name": "ceph_lv0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "tags": {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.crush_device_class": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.encrypted": "0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_id": "0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.vdo": "0"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            },
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "vg_name": "ceph_vg0"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        }
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    ],
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    "1": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "devices": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "/dev/loop4"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            ],
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_name": "ceph_lv1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_size": "21470642176",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "name": "ceph_lv1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "tags": {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.crush_device_class": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.encrypted": "0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_id": "1",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.vdo": "0"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            },
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "vg_name": "ceph_vg1"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        }
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    ],
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    "2": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "devices": [
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "/dev/loop5"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            ],
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_name": "ceph_lv2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_size": "21470642176",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "name": "ceph_lv2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "tags": {
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.crush_device_class": "",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.encrypted": "0",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osd_id": "2",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:                "ceph.vdo": "0"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            },
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "type": "block",
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:            "vg_name": "ceph_vg2"
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:        }
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]:    ]
Nov 25 02:50:43 np0005534516 gifted_engelbart[94299]: }
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:43 np0005534516 systemd[1]: libpod-5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3.scope: Deactivated successfully.
Nov 25 02:50:43 np0005534516 podman[94281]: 2025-11-25 07:50:43.724321959 +0000 UTC m=+1.187858682 container died 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2708298826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: osd.2 [v2:192.168.122.100:6810/507468267,v1:192.168.122.100:6811/507468267] boot
Nov 25 02:50:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cd106175a18fede6e9cb471923e387144c6c7fc88c5f8b9f10f6ecd426d31c55-merged.mount: Deactivated successfully.
Nov 25 02:50:43 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.cpskve(active, since 111s)
Nov 25 02:50:43 np0005534516 podman[94281]: 2025-11-25 07:50:43.881894485 +0000 UTC m=+1.345431208 container remove 5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:50:43 np0005534516 systemd[1]: libpod-conmon-5dcd5e623bbf34fc891726656e70aebb3564b645d69a94a6909c95ac4856e1f3.scope: Deactivated successfully.
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3505958332' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.555459499 +0000 UTC m=+0.039518812 container create 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:44 np0005534516 systemd[1]: Started libpod-conmon-88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30.scope.
Nov 25 02:50:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.626672563 +0000 UTC m=+0.110731856 container init 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.631876938 +0000 UTC m=+0.115936221 container start 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.539460965 +0000 UTC m=+0.023520258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.636625465 +0000 UTC m=+0.120684778 container attach 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:44 np0005534516 busy_jones[94556]: 167 167
Nov 25 02:50:44 np0005534516 systemd[1]: libpod-88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30.scope: Deactivated successfully.
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.642086325 +0000 UTC m=+0.126145608 container died 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:50:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-74153a9eb4f60a47c9325da98e2eb559c623087b28ff9e49752bd3cda7ce7343-merged.mount: Deactivated successfully.
Nov 25 02:50:44 np0005534516 podman[94540]: 2025-11-25 07:50:44.697292395 +0000 UTC m=+0.181351678 container remove 88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:44 np0005534516 systemd[1]: libpod-conmon-88ddfd5cf873bab2e9e9345bb7f9bcc690eae660c5f98bbc1d546dd5eadfec30.scope: Deactivated successfully.
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3505958332' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 25 02:50:44 np0005534516 bold_mayer[94357]: pool 'volumes' created
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 25 02:50:44 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:44 np0005534516 systemd[1]: libpod-539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1.scope: Deactivated successfully.
Nov 25 02:50:44 np0005534516 podman[94342]: 2025-11-25 07:50:44.7577743 +0000 UTC m=+1.567936826 container died 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3505958332' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:44 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3505958332' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8a30acd122bf66620f711243069cb67a25169d64f37119f9777797d68d09db04-merged.mount: Deactivated successfully.
Nov 25 02:50:44 np0005534516 podman[94342]: 2025-11-25 07:50:44.856148706 +0000 UTC m=+1.666311222 container remove 539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1 (image=quay.io/ceph/ceph:v18, name=bold_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:50:44 np0005534516 systemd[1]: libpod-conmon-539d83b95d30001d830ab44094ed392a50465d465e8836d92bea91a3219b15d1.scope: Deactivated successfully.
Nov 25 02:50:44 np0005534516 podman[94593]: 2025-11-25 07:50:44.88992024 +0000 UTC m=+0.050915644 container create 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:44 np0005534516 systemd[1]: Started libpod-conmon-1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa.scope.
Nov 25 02:50:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:44 np0005534516 podman[94593]: 2025-11-25 07:50:44.872066458 +0000 UTC m=+0.033061892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e64d68eb445139377861f8254ae9290e9f82faa41ecb86b1c7b16b28d9b48a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e64d68eb445139377861f8254ae9290e9f82faa41ecb86b1c7b16b28d9b48a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e64d68eb445139377861f8254ae9290e9f82faa41ecb86b1c7b16b28d9b48a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e64d68eb445139377861f8254ae9290e9f82faa41ecb86b1c7b16b28d9b48a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:44 np0005534516 podman[94593]: 2025-11-25 07:50:44.991072031 +0000 UTC m=+0.152067475 container init 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:44 np0005534516 podman[94593]: 2025-11-25 07:50:44.997135914 +0000 UTC m=+0.158131318 container start 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 02:50:45 np0005534516 podman[94593]: 2025-11-25 07:50:45.005446242 +0000 UTC m=+0.166441656 container attach 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 02:50:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v66: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 25 02:50:45 np0005534516 python3[94640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:45 np0005534516 podman[94641]: 2025-11-25 07:50:45.233725109 +0000 UTC m=+0.061006317 container create 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:45 np0005534516 systemd[1]: Started libpod-conmon-10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a.scope.
Nov 25 02:50:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7566f2a991f46c7e8e18e386bbe88c8b5e29c83940003496ab398d091014e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7566f2a991f46c7e8e18e386bbe88c8b5e29c83940003496ab398d091014e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:45 np0005534516 podman[94641]: 2025-11-25 07:50:45.208923326 +0000 UTC m=+0.036204574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:45 np0005534516 podman[94641]: 2025-11-25 07:50:45.308366993 +0000 UTC m=+0.135648271 container init 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:45 np0005534516 podman[94641]: 2025-11-25 07:50:45.313527908 +0000 UTC m=+0.140809126 container start 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:50:45 np0005534516 podman[94641]: 2025-11-25 07:50:45.322587801 +0000 UTC m=+0.149869049 container attach 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:50:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 02:50:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 25 02:50:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 25 02:50:45 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211881131' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:45 np0005534516 focused_robinson[94610]: {
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_id": 1,
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "type": "bluestore"
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    },
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_id": 2,
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "type": "bluestore"
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    },
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_id": 0,
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:        "type": "bluestore"
Nov 25 02:50:45 np0005534516 focused_robinson[94610]:    }
Nov 25 02:50:45 np0005534516 focused_robinson[94610]: }
Nov 25 02:50:45 np0005534516 systemd[1]: libpod-1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa.scope: Deactivated successfully.
Nov 25 02:50:45 np0005534516 podman[94593]: 2025-11-25 07:50:45.968584008 +0000 UTC m=+1.129579422 container died 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-33e64d68eb445139377861f8254ae9290e9f82faa41ecb86b1c7b16b28d9b48a-merged.mount: Deactivated successfully.
Nov 25 02:50:46 np0005534516 podman[94593]: 2025-11-25 07:50:46.026682055 +0000 UTC m=+1.187677459 container remove 1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_robinson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 02:50:46 np0005534516 systemd[1]: libpod-conmon-1cf13b83d27c3f73389c1a799dd97a165eaf724017da450d6f4356506b1584aa.scope: Deactivated successfully.
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/211881131' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211881131' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 25 02:50:46 np0005534516 wizardly_greider[94656]: pool 'backups' created
Nov 25 02:50:46 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 25 02:50:46 np0005534516 systemd[1]: libpod-10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a.scope: Deactivated successfully.
Nov 25 02:50:46 np0005534516 podman[94641]: 2025-11-25 07:50:46.789550341 +0000 UTC m=+1.616831579 container died 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 02:50:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ad7566f2a991f46c7e8e18e386bbe88c8b5e29c83940003496ab398d091014e9-merged.mount: Deactivated successfully.
Nov 25 02:50:46 np0005534516 podman[94641]: 2025-11-25 07:50:46.86299539 +0000 UTC m=+1.690276648 container remove 10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a (image=quay.io/ceph/ceph:v18, name=wizardly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:50:46 np0005534516 systemd[1]: libpod-conmon-10f6781948152e3f43b0829376d754c54f5590c967de1162e7989785c315708a.scope: Deactivated successfully.
Nov 25 02:50:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 1 unknown, 2 active+clean, 1 creating+peering; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:47 np0005534516 podman[94983]: 2025-11-25 07:50:47.126044963 +0000 UTC m=+0.064857576 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:47 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:47 np0005534516 python3[94982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:47 np0005534516 podman[94983]: 2025-11-25 07:50:47.25664736 +0000 UTC m=+0.195459973 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:50:47 np0005534516 podman[95004]: 2025-11-25 07:50:47.298951808 +0000 UTC m=+0.088796081 container create 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:47 np0005534516 systemd[1]: Started libpod-conmon-619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8.scope.
Nov 25 02:50:47 np0005534516 podman[95004]: 2025-11-25 07:50:47.250445114 +0000 UTC m=+0.040289427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d16fb7de7ae84745f55dce5e6d15bb176d30d3c08172818450b378dd91d412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d16fb7de7ae84745f55dce5e6d15bb176d30d3c08172818450b378dd91d412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:47 np0005534516 podman[95004]: 2025-11-25 07:50:47.381221536 +0000 UTC m=+0.171065849 container init 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 02:50:47 np0005534516 podman[95004]: 2025-11-25 07:50:47.388012393 +0000 UTC m=+0.177856686 container start 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:47 np0005534516 podman[95004]: 2025-11-25 07:50:47.39180202 +0000 UTC m=+0.181646313 container attach 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 02:50:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 02:50:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130542091' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/211881131' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:50:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:50:49 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f41b297f-14f5-4b9c-a336-57b4a2f56fce does not exist
Nov 25 02:50:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 749e1cf9-b6d4-4d15-b16a-c48937ef9388 does not exist
Nov 25 02:50:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2cb389a5-51f3-4ebf-876b-a24b1c0b6d31 does not exist
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1130542091' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130542091' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 25 02:50:49 np0005534516 recursing_poincare[95036]: pool 'images' created
Nov 25 02:50:49 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 25 02:50:49 np0005534516 systemd[1]: libpod-619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8.scope: Deactivated successfully.
Nov 25 02:50:49 np0005534516 podman[95004]: 2025-11-25 07:50:49.386965617 +0000 UTC m=+2.176809960 container died 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 02:50:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-16d16fb7de7ae84745f55dce5e6d15bb176d30d3c08172818450b378dd91d412-merged.mount: Deactivated successfully.
Nov 25 02:50:49 np0005534516 podman[95004]: 2025-11-25 07:50:49.47884377 +0000 UTC m=+2.268688043 container remove 619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8 (image=quay.io/ceph/ceph:v18, name=recursing_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 02:50:49 np0005534516 systemd[1]: libpod-conmon-619c5e921b8a37d54c1076657e580b093f6cdac9ecfd2e0adbce3b71ac1e43e8.scope: Deactivated successfully.
Nov 25 02:50:49 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:49 np0005534516 python3[95290]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:49 np0005534516 podman[95319]: 2025-11-25 07:50:49.855063677 +0000 UTC m=+0.053809482 container create 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:50:49 np0005534516 systemd[1]: Started libpod-conmon-63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e.scope.
Nov 25 02:50:49 np0005534516 podman[95319]: 2025-11-25 07:50:49.824179371 +0000 UTC m=+0.022925186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:49 np0005534516 podman[95330]: 2025-11-25 07:50:49.969396305 +0000 UTC m=+0.141210774 container create 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:50:49 np0005534516 podman[95330]: 2025-11-25 07:50:49.888359052 +0000 UTC m=+0.060173531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:50 np0005534516 podman[95319]: 2025-11-25 07:50:50.01451987 +0000 UTC m=+0.213265685 container init 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:50:50 np0005534516 systemd[1]: Started libpod-conmon-2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10.scope.
Nov 25 02:50:50 np0005534516 podman[95319]: 2025-11-25 07:50:50.026422801 +0000 UTC m=+0.225168636 container start 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:50 np0005534516 vibrant_vaughan[95348]: 167 167
Nov 25 02:50:50 np0005534516 systemd[1]: libpod-63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e.scope: Deactivated successfully.
Nov 25 02:50:50 np0005534516 podman[95319]: 2025-11-25 07:50:50.052649303 +0000 UTC m=+0.251395108 container attach 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:50 np0005534516 podman[95319]: 2025-11-25 07:50:50.053100362 +0000 UTC m=+0.251846157 container died 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87a27dc3ff1769ed46a85c7a59b48639c8a7e722aeb16dea6431d92ea13d0b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87a27dc3ff1769ed46a85c7a59b48639c8a7e722aeb16dea6431d92ea13d0b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 podman[95330]: 2025-11-25 07:50:50.135124375 +0000 UTC m=+0.306938854 container init 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 02:50:50 np0005534516 podman[95330]: 2025-11-25 07:50:50.143501134 +0000 UTC m=+0.315315643 container start 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d44e1647ee70f91f063707411a5e4c247d0f648755776a6b645df74963eb8990-merged.mount: Deactivated successfully.
Nov 25 02:50:50 np0005534516 podman[95319]: 2025-11-25 07:50:50.233681822 +0000 UTC m=+0.432427617 container remove 63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 02:50:50 np0005534516 podman[95330]: 2025-11-25 07:50:50.25770415 +0000 UTC m=+0.429518639 container attach 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:50:50 np0005534516 systemd[1]: libpod-conmon-63fa1c6d4a78af256d2698a64bf090cde3d00446a74e64fcd5eb33c182a94a9e.scope: Deactivated successfully.
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1130542091' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:50 np0005534516 podman[95380]: 2025-11-25 07:50:50.453649782 +0000 UTC m=+0.093131630 container create 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:50:50 np0005534516 podman[95380]: 2025-11-25 07:50:50.379929837 +0000 UTC m=+0.019411705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 25 02:50:50 np0005534516 systemd[1]: Started libpod-conmon-342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091.scope.
Nov 25 02:50:50 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:50 np0005534516 podman[95380]: 2025-11-25 07:50:50.624603968 +0000 UTC m=+0.264085846 container init 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:50 np0005534516 podman[95380]: 2025-11-25 07:50:50.6370777 +0000 UTC m=+0.276559548 container start 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:50 np0005534516 podman[95380]: 2025-11-25 07:50:50.680481001 +0000 UTC m=+0.319962929 container attach 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2889234766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v74: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2889234766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:51 np0005534516 eloquent_bhabha[95416]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:50:51 np0005534516 eloquent_bhabha[95416]: --> relative data size: 1.0
Nov 25 02:50:51 np0005534516 eloquent_bhabha[95416]: --> All data devices are unavailable
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2889234766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 02:50:51 np0005534516 affectionate_noyce[95354]: pool 'cephfs.cephfs.meta' created
Nov 25 02:50:51 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 02:50:51 np0005534516 systemd[1]: libpod-2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10.scope: Deactivated successfully.
Nov 25 02:50:51 np0005534516 systemd[1]: libpod-342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091.scope: Deactivated successfully.
Nov 25 02:50:51 np0005534516 podman[95380]: 2025-11-25 07:50:51.781991391 +0000 UTC m=+1.421473259 container died 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:51 np0005534516 systemd[1]: libpod-342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091.scope: Consumed 1.107s CPU time.
Nov 25 02:50:51 np0005534516 podman[95449]: 2025-11-25 07:50:51.820978392 +0000 UTC m=+0.035119034 container died 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c87a27dc3ff1769ed46a85c7a59b48639c8a7e722aeb16dea6431d92ea13d0b6-merged.mount: Deactivated successfully.
Nov 25 02:50:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b3c828e66d2891528ac95b6fc3e3e9439d70038044918cff5942ac79b7fb8a58-merged.mount: Deactivated successfully.
Nov 25 02:50:52 np0005534516 podman[95449]: 2025-11-25 07:50:52.134828875 +0000 UTC m=+0.348969527 container remove 2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10 (image=quay.io/ceph/ceph:v18, name=affectionate_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 02:50:52 np0005534516 systemd[1]: libpod-conmon-2e3cd674a012a502b97876e6da21ab91e278d5a22f7a1a21d7612b90d59c3a10.scope: Deactivated successfully.
Nov 25 02:50:52 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:52 np0005534516 podman[95380]: 2025-11-25 07:50:52.374248678 +0000 UTC m=+2.013730556 container remove 342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:52 np0005534516 systemd[1]: libpod-conmon-342892a4e9b1c406a690ad082b2aad40cfed15a62646f01fe8dff5017fac1091.scope: Deactivated successfully.
Nov 25 02:50:52 np0005534516 python3[95501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:52 np0005534516 podman[95527]: 2025-11-25 07:50:52.562628066 +0000 UTC m=+0.063061909 container create b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:52 np0005534516 systemd[1]: Started libpod-conmon-b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e.scope.
Nov 25 02:50:52 np0005534516 podman[95527]: 2025-11-25 07:50:52.530198239 +0000 UTC m=+0.030632092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb51bec5a639af5237a81bb5a36193399ce9399669f43572b755323fe1005d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb51bec5a639af5237a81bb5a36193399ce9399669f43572b755323fe1005d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:52 np0005534516 podman[95527]: 2025-11-25 07:50:52.664685746 +0000 UTC m=+0.165119599 container init b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:50:52 np0005534516 podman[95527]: 2025-11-25 07:50:52.672533775 +0000 UTC m=+0.172967608 container start b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 02:50:52 np0005534516 podman[95527]: 2025-11-25 07:50:52.67722095 +0000 UTC m=+0.177654783 container attach b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:50:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 02:50:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 02:50:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 02:50:52 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2889234766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:52 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:53 np0005534516 podman[95678]: 2025-11-25 07:50:53.066912851 +0000 UTC m=+0.049924974 container create d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:50:53
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Some PGs (0.333333) are unknown; try again later
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v77: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:53 np0005534516 systemd[1]: Started libpod-conmon-d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb.scope.
Nov 25 02:50:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:53 np0005534516 podman[95678]: 2025-11-25 07:50:53.045219981 +0000 UTC m=+0.028232124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:53 np0005534516 podman[95678]: 2025-11-25 07:50:53.155346833 +0000 UTC m=+0.138358996 container init d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:50:53 np0005534516 podman[95678]: 2025-11-25 07:50:53.164911996 +0000 UTC m=+0.147924159 container start d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:50:53 np0005534516 focused_dirac[95696]: 167 167
Nov 25 02:50:53 np0005534516 podman[95678]: 2025-11-25 07:50:53.168871367 +0000 UTC m=+0.151883520 container attach d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:53 np0005534516 systemd[1]: libpod-d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb.scope: Deactivated successfully.
Nov 25 02:50:53 np0005534516 podman[95701]: 2025-11-25 07:50:53.211467161 +0000 UTC m=+0.029049580 container died d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2155920986' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2d66e555cf5fe5a547930caba9c849b854f05ab458b9d08a51124942d606e9e2-merged.mount: Deactivated successfully.
Nov 25 02:50:53 np0005534516 podman[95701]: 2025-11-25 07:50:53.258890522 +0000 UTC m=+0.076472861 container remove d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:50:53 np0005534516 systemd[1]: libpod-conmon-d66496b06273d5616c4a869f910a0e2abf127da23666223022bbed21ed1054eb.scope: Deactivated successfully.
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:50:53 np0005534516 podman[95727]: 2025-11-25 07:50:53.442517735 +0000 UTC m=+0.042670597 container create 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:53 np0005534516 systemd[1]: Started libpod-conmon-67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c.scope.
Nov 25 02:50:53 np0005534516 podman[95727]: 2025-11-25 07:50:53.421960798 +0000 UTC m=+0.022113640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b87209fabbf5700d3f8140df5e6d3f891c55512fd756cbdcb855f913dc0388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b87209fabbf5700d3f8140df5e6d3f891c55512fd756cbdcb855f913dc0388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b87209fabbf5700d3f8140df5e6d3f891c55512fd756cbdcb855f913dc0388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b87209fabbf5700d3f8140df5e6d3f891c55512fd756cbdcb855f913dc0388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:53 np0005534516 podman[95727]: 2025-11-25 07:50:53.533430928 +0000 UTC m=+0.133583790 container init 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:53 np0005534516 podman[95727]: 2025-11-25 07:50:53.541365369 +0000 UTC m=+0.141518181 container start 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:50:53 np0005534516 podman[95727]: 2025-11-25 07:50:53.544651085 +0000 UTC m=+0.144803967 container attach 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2155920986' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 02:50:53 np0005534516 focused_shannon[95590]: pool 'cephfs.cephfs.data' created
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 02:50:53 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 18a00230-25a7-4a30-be13-1cd28ac83edb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2155920986' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 02:50:53 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:53 np0005534516 systemd[1]: libpod-b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e.scope: Deactivated successfully.
Nov 25 02:50:53 np0005534516 podman[95527]: 2025-11-25 07:50:53.800271207 +0000 UTC m=+1.300705080 container died b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:50:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dcb51bec5a639af5237a81bb5a36193399ce9399669f43572b755323fe1005d6-merged.mount: Deactivated successfully.
Nov 25 02:50:53 np0005534516 podman[95527]: 2025-11-25 07:50:53.869295717 +0000 UTC m=+1.369729550 container remove b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e (image=quay.io/ceph/ceph:v18, name=focused_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:53 np0005534516 systemd[1]: libpod-conmon-b34ccb15caebefa1697dce2d23d346774ab23926d738793ca9f745f892a44c8e.scope: Deactivated successfully.
Nov 25 02:50:54 np0005534516 python3[95788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]: {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    "0": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "devices": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "/dev/loop3"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            ],
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_name": "ceph_lv0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_size": "21470642176",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "name": "ceph_lv0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "tags": {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.crush_device_class": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.encrypted": "0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_id": "0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.vdo": "0"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            },
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "vg_name": "ceph_vg0"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        }
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    ],
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    "1": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "devices": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "/dev/loop4"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            ],
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_name": "ceph_lv1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_size": "21470642176",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "name": "ceph_lv1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "tags": {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.crush_device_class": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.encrypted": "0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_id": "1",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.vdo": "0"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            },
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "vg_name": "ceph_vg1"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        }
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    ],
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    "2": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "devices": [
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "/dev/loop5"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            ],
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_name": "ceph_lv2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_size": "21470642176",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "name": "ceph_lv2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "tags": {
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.cluster_name": "ceph",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.crush_device_class": "",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.encrypted": "0",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osd_id": "2",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:                "ceph.vdo": "0"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            },
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "type": "block",
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:            "vg_name": "ceph_vg2"
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:        }
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]:    ]
Nov 25 02:50:54 np0005534516 lucid_wozniak[95743]: }
Nov 25 02:50:54 np0005534516 systemd[1]: libpod-67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c.scope: Deactivated successfully.
Nov 25 02:50:54 np0005534516 podman[95791]: 2025-11-25 07:50:54.322712769 +0000 UTC m=+0.049853112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:54 np0005534516 podman[95791]: 2025-11-25 07:50:54.733635979 +0000 UTC m=+0.460776332 container create f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:50:54 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 02:50:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:55 np0005534516 podman[95727]: 2025-11-25 07:50:55.106153751 +0000 UTC m=+1.706306603 container died 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 02:50:55 np0005534516 systemd[1]: Started libpod-conmon-f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9.scope.
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb65ee91939c6aeaa1650846617f630357b8168b7ab2c6d4288fefaaf1a35c5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb65ee91939c6aeaa1650846617f630357b8168b7ab2c6d4288fefaaf1a35c5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 02:50:55 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 67a83eb4-2a03-470a-a5bd-4eaed2946ad5 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2155920986' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d0b87209fabbf5700d3f8140df5e6d3f891c55512fd756cbdcb855f913dc0388-merged.mount: Deactivated successfully.
Nov 25 02:50:55 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:55 np0005534516 podman[95727]: 2025-11-25 07:50:55.868383725 +0000 UTC m=+2.468536587 container remove 67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wozniak, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:55 np0005534516 systemd[1]: libpod-conmon-67ebda80c2edea97ec770ccab3f6fc836286584f7c8a0191a03ee7b56857e49c.scope: Deactivated successfully.
Nov 25 02:50:55 np0005534516 podman[95791]: 2025-11-25 07:50:55.9633942 +0000 UTC m=+1.690534573 container init f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:55 np0005534516 podman[95791]: 2025-11-25 07:50:55.976723201 +0000 UTC m=+1.703863534 container start f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:50:55 np0005534516 podman[95791]: 2025-11-25 07:50:55.990789656 +0000 UTC m=+1.717930009 container attach f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 02:50:56 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev bde1744a-e2f0-44b6-b48d-4291b4463e6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.552903032 +0000 UTC m=+0.037584814 container create 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 02:50:56 np0005534516 systemd[1]: Started libpod-conmon-767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba.scope.
Nov 25 02:50:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.621083214 +0000 UTC m=+0.105765016 container init 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 25 02:50:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1143318272' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.627007214 +0000 UTC m=+0.111688996 container start 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.629971254 +0000 UTC m=+0.114653056 container attach 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 02:50:56 np0005534516 great_euler[96003]: 167 167
Nov 25 02:50:56 np0005534516 systemd[1]: libpod-767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba.scope: Deactivated successfully.
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.632004825 +0000 UTC m=+0.116686677 container died 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.538253215 +0000 UTC m=+0.022935017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-57a4249c5e26deaed85edee4da568f76fc32b263da32b9c1554e3c41678108c2-merged.mount: Deactivated successfully.
Nov 25 02:50:56 np0005534516 podman[95987]: 2025-11-25 07:50:56.68008968 +0000 UTC m=+0.164771462 container remove 767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:50:56 np0005534516 systemd[1]: libpod-conmon-767f77e516c76b5104548f22cf411e76fd3535144e1b21f6030d035ad38cd4ba.scope: Deactivated successfully.
Nov 25 02:50:56 np0005534516 podman[96028]: 2025-11-25 07:50:56.87543627 +0000 UTC m=+0.052773041 container create b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:56 np0005534516 systemd[1]: Started libpod-conmon-b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6.scope.
Nov 25 02:50:56 np0005534516 podman[96028]: 2025-11-25 07:50:56.853073097 +0000 UTC m=+0.030409878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:50:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035aa867065da57a5c371c02c497365b4098b471413ea29ced3aabad6a41e34b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035aa867065da57a5c371c02c497365b4098b471413ea29ced3aabad6a41e34b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035aa867065da57a5c371c02c497365b4098b471413ea29ced3aabad6a41e34b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035aa867065da57a5c371c02c497365b4098b471413ea29ced3aabad6a41e34b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:56 np0005534516 podman[96028]: 2025-11-25 07:50:56.984614404 +0000 UTC m=+0.161951175 container init b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 02:50:56 np0005534516 podman[96028]: 2025-11-25 07:50:56.996650397 +0000 UTC m=+0.173987158 container start b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:57 np0005534516 podman[96028]: 2025-11-25 07:50:57.000808242 +0000 UTC m=+0.178145013 container attach b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:50:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1143318272' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 02:50:57 np0005534516 elated_brown[95819]: enabled application 'rbd' on pool 'vms'
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 02:50:57 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 9d02d13f-df1f-41a7-b97d-6edb4cd52e65 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 29 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29 pruub=10.398939133s) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active pruub 45.885337830s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29 pruub=10.398939133s) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown pruub 45.885337830s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.2( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.15( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.16( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1d( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1b( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1e( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1c( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.19( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1a( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.1f( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.3( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.4( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.7( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.8( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.5( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.6( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.c( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.b( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.9( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.a( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.f( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.10( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.e( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.13( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.14( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.11( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.12( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.17( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.d( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 30 pg[2.18( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:57 np0005534516 systemd[1]: libpod-f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9.scope: Deactivated successfully.
Nov 25 02:50:57 np0005534516 podman[95791]: 2025-11-25 07:50:57.411950048 +0000 UTC m=+3.139090381 container died f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:50:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bb65ee91939c6aeaa1650846617f630357b8168b7ab2c6d4288fefaaf1a35c5b-merged.mount: Deactivated successfully.
Nov 25 02:50:57 np0005534516 podman[95791]: 2025-11-25 07:50:57.463973472 +0000 UTC m=+3.191113835 container remove f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9 (image=quay.io/ceph/ceph:v18, name=elated_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1143318272' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1143318272' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:50:57 np0005534516 systemd[1]: libpod-conmon-f2d88ee202fc7ac7937d1c1401f94c51d3d6b05fa97de2f25b0f802deb4515a9.scope: Deactivated successfully.
Nov 25 02:50:57 np0005534516 python3[96087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:57 np0005534516 podman[96104]: 2025-11-25 07:50:57.907456562 +0000 UTC m=+0.061641391 container create bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 02:50:57 np0005534516 systemd[1]: Started libpod-conmon-bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a.scope.
Nov 25 02:50:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:50:57 np0005534516 laughing_tu[96045]: {
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_id": 1,
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "type": "bluestore"
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    },
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_id": 2,
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "type": "bluestore"
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    },
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_id": 0,
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:        "type": "bluestore"
Nov 25 02:50:57 np0005534516 laughing_tu[96045]:    }
Nov 25 02:50:57 np0005534516 laughing_tu[96045]: }
Nov 25 02:50:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd854735dace8280f694219d0e25cdbd3379ca3547646e0a2258a2ebacba20c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd854735dace8280f694219d0e25cdbd3379ca3547646e0a2258a2ebacba20c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:50:57 np0005534516 podman[96104]: 2025-11-25 07:50:57.885766973 +0000 UTC m=+0.039951902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:50:57 np0005534516 podman[96104]: 2025-11-25 07:50:57.987600417 +0000 UTC m=+0.141785316 container init bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:50:57 np0005534516 systemd[1]: libpod-b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6.scope: Deactivated successfully.
Nov 25 02:50:57 np0005534516 podman[96104]: 2025-11-25 07:50:57.997160661 +0000 UTC m=+0.151345500 container start bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 02:50:57 np0005534516 systemd[1]: libpod-b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6.scope: Consumed 1.004s CPU time.
Nov 25 02:50:57 np0005534516 podman[96028]: 2025-11-25 07:50:57.997615871 +0000 UTC m=+1.174952632 container died b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 02:50:58 np0005534516 podman[96104]: 2025-11-25 07:50:58.005668503 +0000 UTC m=+0.159853342 container attach bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:50:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-035aa867065da57a5c371c02c497365b4098b471413ea29ced3aabad6a41e34b-merged.mount: Deactivated successfully.
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=14.953918457s) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 67.890365601s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=14.953918457s) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown pruub 67.890365601s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 podman[96028]: 2025-11-25 07:50:58.085026403 +0000 UTC m=+1.262363154 container remove b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 02:50:58 np0005534516 systemd[1]: libpod-conmon-b0be7bf37e1d921739e6e141705cf09a31349f38cf6402135e7c1f991ba30bd6.scope: Deactivated successfully.
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=11.433304787s) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 58.160362244s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=11.433304787s) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown pruub 58.160362244s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 3405a6e5-7e4e-4835-bf68-9b8854f351d2 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 18a00230-25a7-4a30-be13-1cd28ac83edb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 18a00230-25a7-4a30-be13-1cd28ac83edb (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 67a83eb4-2a03-470a-a5bd-4eaed2946ad5 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 67a83eb4-2a03-470a-a5bd-4eaed2946ad5 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev bde1744a-e2f0-44b6-b48d-4291b4463e6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event bde1744a-e2f0-44b6-b48d-4291b4463e6a (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 9d02d13f-df1f-41a7-b97d-6edb4cd52e65 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 9d02d13f-df1f-41a7-b97d-6edb4cd52e65 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 3405a6e5-7e4e-4835-bf68-9b8854f351d2 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 3405a6e5-7e4e-4835-bf68-9b8854f351d2 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 0 seconds
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1e( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1f( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.a( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.7( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.6( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.5( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.3( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.4( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.b( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.2( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.c( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.d( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1c( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.f( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.10( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.12( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.13( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.14( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.16( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.17( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.18( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.19( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1e( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.8( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.3( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.2( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.c( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.0( empty local-lis/les=29/31 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.7( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.e( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.11( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.10( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.12( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.14( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.16( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.17( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1a( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 31 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [2] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=21/22 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.4( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.10( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.2( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.14( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 31 pg[3.19( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=30/31 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:50:58 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 8 completed events
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 25 02:50:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327909344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v85: 100 pgs: 32 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 25 02:50:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1327909344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327909344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 02:50:59 np0005534516 zealous_driscoll[96129]: enabled application 'rbd' on pool 'volumes'
Nov 25 02:50:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 02:50:59 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=9.232100487s) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 63.624557495s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:50:59 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=9.232100487s) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown pruub 63.624557495s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:50:59 np0005534516 systemd[1]: libpod-bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a.scope: Deactivated successfully.
Nov 25 02:50:59 np0005534516 podman[96104]: 2025-11-25 07:50:59.540223533 +0000 UTC m=+1.694408372 container died bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:50:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8cd854735dace8280f694219d0e25cdbd3379ca3547646e0a2258a2ebacba20c-merged.mount: Deactivated successfully.
Nov 25 02:50:59 np0005534516 podman[96104]: 2025-11-25 07:50:59.59333113 +0000 UTC m=+1.747515969 container remove bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a (image=quay.io/ceph/ceph:v18, name=zealous_driscoll, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:50:59 np0005534516 systemd[1]: libpod-conmon-bca0211e4f6927e8817083387c5253d877fb7fe6244e54a02a1321127b96547a.scope: Deactivated successfully.
Nov 25 02:50:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 25 02:50:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 25 02:50:59 np0005534516 python3[96257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:50:59 np0005534516 podman[96258]: 2025-11-25 07:50:59.9721744 +0000 UTC m=+0.067025179 container create 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:00 np0005534516 systemd[1]: Started libpod-conmon-8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c.scope.
Nov 25 02:51:00 np0005534516 podman[96258]: 2025-11-25 07:50:59.933928325 +0000 UTC m=+0.028779194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189acb8a38cdaaaa0dcf6fb47ed7b960b7a174662e44e3f294db1a97d68baf6c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189acb8a38cdaaaa0dcf6fb47ed7b960b7a174662e44e3f294db1a97d68baf6c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:00 np0005534516 podman[96258]: 2025-11-25 07:51:00.073187579 +0000 UTC m=+0.168038438 container init 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:51:00 np0005534516 podman[96258]: 2025-11-25 07:51:00.079005307 +0000 UTC m=+0.173856096 container start 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:00 np0005534516 podman[96258]: 2025-11-25 07:51:00.082285363 +0000 UTC m=+0.177136232 container attach 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1327909344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=25/26 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=32/33 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=25/25 les/c/f=26/26/0 sis=32) [0] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 25 02:51:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1707643645' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 02:51:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v88: 162 pgs: 32 peering, 93 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 32 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.969687462s) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 52.652111053s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.969687462s) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown pruub 52.652111053s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.8( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.a( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.b( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.10( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.d( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.6( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.17( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1b( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1c( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1f( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=23/24 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1707643645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 02:51:01 np0005534516 laughing_payne[96273]: enabled application 'rbd' on pool 'backups'
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1707643645' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:01 np0005534516 systemd[1]: libpod-8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c.scope: Deactivated successfully.
Nov 25 02:51:01 np0005534516 podman[96258]: 2025-11-25 07:51:01.741907008 +0000 UTC m=+1.836757827 container died 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:01 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 02:51:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-189acb8a38cdaaaa0dcf6fb47ed7b960b7a174662e44e3f294db1a97d68baf6c-merged.mount: Deactivated successfully.
Nov 25 02:51:01 np0005534516 podman[96258]: 2025-11-25 07:51:01.870667198 +0000 UTC m=+1.965517987 container remove 8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c (image=quay.io/ceph/ceph:v18, name=laughing_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:01 np0005534516 systemd[1]: libpod-conmon-8bbc37834fa48c52e52e9c2cb6821e17bd42db621147959a589ea2fccc77617c.scope: Deactivated successfully.
Nov 25 02:51:02 np0005534516 python3[96335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:02 np0005534516 podman[96336]: 2025-11-25 07:51:02.292441989 +0000 UTC m=+0.057962186 container create 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:02 np0005534516 systemd[1]: Started libpod-conmon-1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab.scope.
Nov 25 02:51:02 np0005534516 podman[96336]: 2025-11-25 07:51:02.265784448 +0000 UTC m=+0.031304655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb388a7b6a87d2474eb934ab199ac25c94946dabc01fe958cc8d06652c9c51c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb388a7b6a87d2474eb934ab199ac25c94946dabc01fe958cc8d06652c9c51c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:02 np0005534516 podman[96336]: 2025-11-25 07:51:02.384263151 +0000 UTC m=+0.149783398 container init 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:02 np0005534516 podman[96336]: 2025-11-25 07:51:02.393995968 +0000 UTC m=+0.159516175 container start 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:51:02 np0005534516 podman[96336]: 2025-11-25 07:51:02.407168925 +0000 UTC m=+0.172689172 container attach 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1707643645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1c( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1e( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1d( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1f( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.10( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.12( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.14( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.13( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.16( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.17( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.8( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.15( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.a( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=32/35 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.7( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.b( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.9( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.5( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.4( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.2( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.3( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.f( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.d( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.e( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.6( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.c( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1b( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.1a( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.11( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.18( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 35 pg[5.19( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=23/23 les/c/f=24/24/0 sis=32) [2] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 25 02:51:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2495177434' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 02:51:03 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 25 02:51:03 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 25 02:51:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v91: 162 pgs: 32 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:03 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 02:51:03 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 02:51:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 02:51:03 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2495177434' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 02:51:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2495177434' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 02:51:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 02:51:03 np0005534516 amazing_moser[96351]: enabled application 'rbd' on pool 'images'
Nov 25 02:51:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 02:51:03 np0005534516 systemd[1]: libpod-1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab.scope: Deactivated successfully.
Nov 25 02:51:03 np0005534516 podman[96336]: 2025-11-25 07:51:03.894003157 +0000 UTC m=+1.659523364 container died 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2fb388a7b6a87d2474eb934ab199ac25c94946dabc01fe958cc8d06652c9c51c-merged.mount: Deactivated successfully.
Nov 25 02:51:04 np0005534516 podman[96336]: 2025-11-25 07:51:04.222083649 +0000 UTC m=+1.987603806 container remove 1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab (image=quay.io/ceph/ceph:v18, name=amazing_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:04 np0005534516 systemd[1]: libpod-conmon-1f5f3499b0c461c3bc4279cf23ba3bfa746d78842aad5d8fd780525be8ce09ab.scope: Deactivated successfully.
Nov 25 02:51:04 np0005534516 python3[96411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:04 np0005534516 podman[96412]: 2025-11-25 07:51:04.679822608 +0000 UTC m=+0.100245403 container create 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:51:04 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 02:51:04 np0005534516 podman[96412]: 2025-11-25 07:51:04.612163066 +0000 UTC m=+0.032585901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:04 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 02:51:04 np0005534516 systemd[1]: Started libpod-conmon-72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081.scope.
Nov 25 02:51:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e196f6346bc3a8f2296a169ef29e1e3247b0b5ef92d122d08baf52c899961ae9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e196f6346bc3a8f2296a169ef29e1e3247b0b5ef92d122d08baf52c899961ae9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:04 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/2495177434' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 02:51:04 np0005534516 podman[96412]: 2025-11-25 07:51:04.850940157 +0000 UTC m=+0.271362992 container init 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:51:04 np0005534516 podman[96412]: 2025-11-25 07:51:04.86142364 +0000 UTC m=+0.281846395 container start 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:04 np0005534516 podman[96412]: 2025-11-25 07:51:04.881647119 +0000 UTC m=+0.302069904 container attach 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v93: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Nov 25 02:51:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/741912769' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/741912769' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 02:51:05 np0005534516 dreamy_golick[96427]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 02:51:05 np0005534516 systemd[1]: libpod-72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081.scope: Deactivated successfully.
Nov 25 02:51:05 np0005534516 podman[96412]: 2025-11-25 07:51:05.909950146 +0000 UTC m=+1.330372901 container died 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363536835s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811626434s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.356163025s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.804302216s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363462448s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811626434s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.356075287s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.804302216s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363245010s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811496735s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363198280s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811679840s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363232613s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811729431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363172531s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811679840s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363202095s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811729431s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363075256s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811710358s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363058090s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811710358s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363631248s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812343597s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.362999916s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811756134s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363605499s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812343597s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.362977028s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811756134s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363351822s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812255859s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363336563s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812244415s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363330841s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812255859s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363311768s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812244415s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363281250s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812301636s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363255501s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812301636s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363165855s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812393188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363139153s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812393188s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363148689s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812435150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363105774s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812435150s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363492012s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.813014984s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363051414s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.812458038s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363462448s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.813014984s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.362867355s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.812458038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.361438751s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.811336517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.364376068s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.814365387s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.364292145s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.814338684s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363088608s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.813159943s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363196373s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811496735s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.364159584s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.814365387s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.362915993s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.813159943s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.364127159s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.814338684s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363958359s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.814384460s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363934517s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.814384460s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.361090660s) [0] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.811336517s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363858223s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 62.814426422s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.363823891s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.814426422s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/741912769' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.1e( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.1d( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.8( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.7( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.5( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.e( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.11( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.16( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[3.18( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1d( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.657561302s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.844928741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1d( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.657412529s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.844928741s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1e( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.657206535s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.844875336s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1e( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.657174110s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.844875336s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315836906s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503559113s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315773010s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503559113s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315666199s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503520966s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315647125s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503520966s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.17( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315588951s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503547668s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.16( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315558434s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503528595s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315595627s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503601074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.11( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.659687042s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.847713470s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315556526s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503601074s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.17( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315548897s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503547668s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.16( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.315298080s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503528595s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.314643860s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503494263s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.f( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.12( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656119347s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845153809s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.11( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.658781052s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.847713470s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.13( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656277657s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845333099s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.12( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656077385s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845153809s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.13( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656228065s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845333099s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.314437866s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503562927s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.314379692s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503562927s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.15( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656290054s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845584869s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.14( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656015396s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845249176s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.16( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656056404s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845436096s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.14( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655879974s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845249176s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.15( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656208992s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845584869s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.11( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313890457s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503372192s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.11( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313832283s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503372192s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313682556s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503353119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313639641s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503353119s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.16( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655943871s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845436096s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.9( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655868530s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845672607s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.9( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655832291s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845672607s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313582420s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503494263s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.7( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313368797s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503322601s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.7( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655671120s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845653534s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.7( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313344002s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503322601s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313351631s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503345490s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.7( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655634880s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845653534s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.2( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313169479s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503234863s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313306808s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503345490s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.5( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655588150s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845687866s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.3( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313118935s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503219604s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.3( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313093185s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503219604s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.5( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655545235s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845687866s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.4( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655464172s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845695496s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.3( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655470848s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845718384s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313012123s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503269196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.2( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.313018799s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503234863s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.3( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655416489s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845718384s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.4( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655381203s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845695496s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312806129s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503162384s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312917709s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503269196s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312779427s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503162384s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.2( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655216217s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845699310s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.2( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655184746s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845699310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655232430s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845764160s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.f( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655171394s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845733643s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.8( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312532425s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503135681s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312714577s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503192902s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655192375s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845764160s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.8( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312498093s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503135681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312538147s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503192902s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312451363s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503135681s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.f( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655150414s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845733643s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312423706s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503135681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312323570s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503120422s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312334061s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503177643s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.c( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655098915s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.845958710s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312280655s) [1] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503120422s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312308311s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503177643s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.c( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.655072212s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.845958710s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312068939s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503021240s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312003136s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.503002167s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.312034607s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503021240s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.311981201s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.503002167s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1a( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656565666s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.847675323s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.304843903s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 52.495994568s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.1a( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656525612s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.847675323s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37 pruub=8.304823875s) [0] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 52.495994568s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.1d( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.c( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.1b( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.3( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.a( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.9( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.17( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.19( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656622887s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.847724915s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.18( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656396866s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 56.847717285s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.19( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656399727s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.847724915s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.15( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[5.18( empty local-lis/les=32/35 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37 pruub=12.656270027s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 56.847717285s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.325809479s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.302902222s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.325772285s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.302902222s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.12( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.13( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.447907448s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.426673889s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.447883606s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.426673889s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.14( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461641312s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.440536499s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.14( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461623192s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.440536499s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.322090149s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.301132202s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.447601318s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.426643372s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.447340965s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.426643372s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321861267s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.301132202s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.11( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461171150s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.440544128s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.11( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461128235s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.440544128s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321568489s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.301040649s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.18( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.15( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321539879s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.301086426s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321496010s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.301040649s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321502686s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.301086426s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.13( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461498260s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441093445s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321465492s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.301017761s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.7( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.13( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461465836s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441093445s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321335793s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.301017761s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.15( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321282387s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.300979614s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321192741s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.300941467s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321227074s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.300979614s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461191177s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.440979004s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321165085s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.300941467s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321092606s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.300910950s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.14( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.d( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.17( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461163521s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.440979004s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.321072578s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.300910950s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461158752s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441123962s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.320941925s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.300918579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.3( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.11( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.13( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461215973s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441207886s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461136818s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441123962s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.320899963s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.300918579s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461043358s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441123962s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319897652s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active+scrubbing+deep pruub 69.300003052s@ [ 4.2:  ]  TIME_FOR_DEEP mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461079597s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441207886s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460970879s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441123962s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319871902s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.300003052s@ TIME_FOR_DEEP mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319572449s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299858093s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460918427s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441253662s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319538116s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299858093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.13( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460891724s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441253662s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319119453s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299545288s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460733414s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441230774s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.11( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318985939s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299514771s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460700989s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441230774s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.319085121s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299545288s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318936348s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299514771s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460496902s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441192627s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460467339s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441192627s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.e( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318731308s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299499512s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318718910s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299537659s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460166931s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441009521s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318700790s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299499512s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318684578s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299537659s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.f( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460126877s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441009521s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.320325851s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.301261902s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461321831s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.442314148s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.320293427s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.301261902s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318478584s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299453735s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.461286545s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.442314148s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318377495s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299453735s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318272591s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299407959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.1( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.318204880s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299407959s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.19( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.317975998s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299453735s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.317949295s) [1] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299453735s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.459677696s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.441261292s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460811615s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.442398071s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.1a( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.317687035s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 69.299293518s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460733414s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.442344666s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.459627151s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.441261292s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=8.317649841s) [2] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.299293518s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.a( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460806847s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.442497253s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1f( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460721016s) [2] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.442398071s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460789680s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.442497253s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460692406s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.442344666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460541725s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 71.442375183s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[6.1d( empty local-lis/les=32/33 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37 pruub=10.460516930s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.442375183s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.18( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.1b( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.16( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.13( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.15( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.11( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.f( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.7( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.2( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[4.1c( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.1f( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.4( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.4( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[5.2( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.8( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.b( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.1d( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.1c( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 37 pg[6.8( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.6( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.9( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.d( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[2.a( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.c( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.1a( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.e( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.19( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=0/0 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.2( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.1( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.6( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.b( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.c( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.4( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.1c( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.1e( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[6.1d( empty local-lis/les=0/0 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e196f6346bc3a8f2296a169ef29e1e3247b0b5ef92d122d08baf52c899961ae9-merged.mount: Deactivated successfully.
Nov 25 02:51:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=0/0 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:06 np0005534516 podman[96412]: 2025-11-25 07:51:06.314621251 +0000 UTC m=+1.735044066 container remove 72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081 (image=quay.io/ceph/ceph:v18, name=dreamy_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 02:51:06 np0005534516 systemd[1]: libpod-conmon-72c5d43d14486d6957a93330e39f7416bd27fd3a60a333ddaf2cb4496305b081.scope: Deactivated successfully.
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:06 np0005534516 python3[96489]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 02:51:06 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 02:51:06 np0005534516 podman[96490]: 2025-11-25 07:51:06.769834559 +0000 UTC m=+0.083977674 container create af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:06 np0005534516 podman[96490]: 2025-11-25 07:51:06.713777002 +0000 UTC m=+0.027920117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:06 np0005534516 systemd[1]: Started libpod-conmon-af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7.scope.
Nov 25 02:51:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e80dfbea348496b0a422c6128c4f7cd499d56dbeb9f17df30fd0ccf8b9fe14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e80dfbea348496b0a422c6128c4f7cd499d56dbeb9f17df30fd0ccf8b9fe14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:07 np0005534516 podman[96490]: 2025-11-25 07:51:07.067567805 +0000 UTC m=+0.381710970 container init af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 02:51:07 np0005534516 podman[96490]: 2025-11-25 07:51:07.077006217 +0000 UTC m=+0.391149332 container start af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v95: 162 pgs: 46 peering, 116 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 02:51:07 np0005534516 podman[96490]: 2025-11-25 07:51:07.277172494 +0000 UTC m=+0.591315599 container attach af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/741912769' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 25 02:51:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3460571848' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.d( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.c( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.1f( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.13( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.11( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.15( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.14( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.8( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[6.f( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [2] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [2] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=37) [0] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.2( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.4( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.6( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.e( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.b( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=30/21 lis/c=30/30 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.1d( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=37/38 n=0 ec=32/25 lis/c=32/32 les/c/f=33/33/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=32/23 lis/c=32/32 les/c/f=35/35/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=29/17 lis/c=29/29 les/c/f=31/31/0 sis=37) [1] r=0 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 02:51:08 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3460571848' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 02:51:08 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 02:51:08 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 02:51:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3460571848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 02:51:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 02:51:08 np0005534516 focused_visvesvaraya[96505]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 02:51:08 np0005534516 systemd[1]: libpod-af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7.scope: Deactivated successfully.
Nov 25 02:51:08 np0005534516 podman[96490]: 2025-11-25 07:51:08.672291266 +0000 UTC m=+1.986434381 container died af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 25 02:51:08 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 02:51:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d7e80dfbea348496b0a422c6128c4f7cd499d56dbeb9f17df30fd0ccf8b9fe14-merged.mount: Deactivated successfully.
Nov 25 02:51:08 np0005534516 systemd[76588]: Starting Mark boot as successful...
Nov 25 02:51:08 np0005534516 systemd[76588]: Finished Mark boot as successful.
Nov 25 02:51:08 np0005534516 podman[96490]: 2025-11-25 07:51:08.871632987 +0000 UTC m=+2.185776102 container remove af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7 (image=quay.io/ceph/ceph:v18, name=focused_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 02:51:08 np0005534516 systemd[1]: libpod-conmon-af96085c64984ec5baf2f359bece8f77ad31aef30d7b5e2192d7f1588ed062e7.scope: Deactivated successfully.
Nov 25 02:51:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v98: 162 pgs: 46 peering, 116 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:09 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3460571848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 02:51:09 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:51:09 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:51:09 np0005534516 python3[96621]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:51:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 02:51:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 02:51:10 np0005534516 python3[96692]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057069.603395-36570-121333206934783/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:51:10 np0005534516 ceph-mon[75015]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:51:10 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:51:11 np0005534516 python3[96794]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:51:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v99: 162 pgs: 46 peering, 116 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:11 np0005534516 python3[96869]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057070.6322043-36584-97118088200897/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d6688240f27b7645c34ffe8ab67c3e95f6c7d255 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:51:11 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 02:51:11 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 02:51:12 np0005534516 python3[96919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:12 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 25 02:51:12 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 25 02:51:12 np0005534516 podman[96920]: 2025-11-25 07:51:12.06317753 +0000 UTC m=+0.021628369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:12 np0005534516 podman[96920]: 2025-11-25 07:51:12.286469977 +0000 UTC m=+0.244920846 container create 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:12 np0005534516 systemd[1]: Started libpod-conmon-06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8.scope.
Nov 25 02:51:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbe2685919b3b5b7f98fc288eda5bc35caeb3fd450c6671fe212bed5887be50/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbe2685919b3b5b7f98fc288eda5bc35caeb3fd450c6671fe212bed5887be50/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbe2685919b3b5b7f98fc288eda5bc35caeb3fd450c6671fe212bed5887be50/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 02:51:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 02:51:12 np0005534516 podman[96920]: 2025-11-25 07:51:12.56344257 +0000 UTC m=+0.521893409 container init 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:12 np0005534516 podman[96920]: 2025-11-25 07:51:12.574878137 +0000 UTC m=+0.533328976 container start 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 02:51:12 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 02:51:12 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 02:51:12 np0005534516 podman[96920]: 2025-11-25 07:51:12.702214674 +0000 UTC m=+0.660665543 container attach 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:51:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v100: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 02:51:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494299758' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:51:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 25 02:51:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 25 02:51:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494299758' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 02:51:13 np0005534516 epic_euler[96935]: 
Nov 25 02:51:13 np0005534516 epic_euler[96935]: [global]
Nov 25 02:51:13 np0005534516 epic_euler[96935]: #011fsid = a058ea16-8b73-51e1-b172-ed66107102bf
Nov 25 02:51:13 np0005534516 epic_euler[96935]: #011mon_host = 192.168.122.100
Nov 25 02:51:13 np0005534516 systemd[1]: libpod-06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8.scope: Deactivated successfully.
Nov 25 02:51:13 np0005534516 podman[96920]: 2025-11-25 07:51:13.496179789 +0000 UTC m=+1.454630668 container died 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:51:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 25 02:51:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 25 02:51:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7bbe2685919b3b5b7f98fc288eda5bc35caeb3fd450c6671fe212bed5887be50-merged.mount: Deactivated successfully.
Nov 25 02:51:14 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1494299758' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 02:51:14 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1494299758' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 02:51:14 np0005534516 podman[96920]: 2025-11-25 07:51:14.321410165 +0000 UTC m=+2.279860994 container remove 06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8 (image=quay.io/ceph/ceph:v18, name=epic_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:14 np0005534516 systemd[1]: libpod-conmon-06e22b2c209a6b1d629c75c80beb339371c2441b8b420d4fe50c8046c5b905b8.scope: Deactivated successfully.
Nov 25 02:51:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 25 02:51:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 25 02:51:14 np0005534516 python3[97142]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:14 np0005534516 podman[97174]: 2025-11-25 07:51:14.692384307 +0000 UTC m=+0.042623568 container create 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:51:14 np0005534516 podman[97172]: 2025-11-25 07:51:14.711899872 +0000 UTC m=+0.071044542 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 02:51:14 np0005534516 systemd[1]: Started libpod-conmon-49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d.scope.
Nov 25 02:51:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013773b9754f958434cb2ab3f40f46dd46878a3cf00dbf350ef18ccb2f964563/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013773b9754f958434cb2ab3f40f46dd46878a3cf00dbf350ef18ccb2f964563/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013773b9754f958434cb2ab3f40f46dd46878a3cf00dbf350ef18ccb2f964563/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:14 np0005534516 podman[97174]: 2025-11-25 07:51:14.672104911 +0000 UTC m=+0.022344152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:14 np0005534516 podman[97174]: 2025-11-25 07:51:14.776279944 +0000 UTC m=+0.126519205 container init 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 02:51:14 np0005534516 podman[97174]: 2025-11-25 07:51:14.783361205 +0000 UTC m=+0.133600436 container start 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:51:14 np0005534516 podman[97174]: 2025-11-25 07:51:14.786067458 +0000 UTC m=+0.136306699 container attach 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:51:14 np0005534516 podman[97172]: 2025-11-25 07:51:14.814946735 +0000 UTC m=+0.174091395 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v101: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:51:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 31597ea7-bc7d-49a6-a0ab-1fe92126af88 does not exist
Nov 25 02:51:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f5bbc0d-6c7c-4dba-b31c-b25f2a2aee66 does not exist
Nov 25 02:51:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0fe4b769-86ba-4df8-ad23-72c8990b91fc does not exist
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:51:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 25 02:51:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3500156775' entity='client.admin' 
Nov 25 02:51:15 np0005534516 nice_bhabha[97207]: set ssl_option
Nov 25 02:51:15 np0005534516 systemd[1]: libpod-49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d.scope: Deactivated successfully.
Nov 25 02:51:15 np0005534516 podman[97174]: 2025-11-25 07:51:15.49001003 +0000 UTC m=+0.840249281 container died 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 02:51:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-013773b9754f958434cb2ab3f40f46dd46878a3cf00dbf350ef18ccb2f964563-merged.mount: Deactivated successfully.
Nov 25 02:51:15 np0005534516 podman[97174]: 2025-11-25 07:51:15.637914795 +0000 UTC m=+0.988154026 container remove 49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d (image=quay.io/ceph/ceph:v18, name=nice_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:15 np0005534516 systemd[1]: libpod-conmon-49948f347049b9cc2078245472cdedc424d7d9caa6aed60481357a905614d01d.scope: Deactivated successfully.
Nov 25 02:51:15 np0005534516 python3[97471]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:16 np0005534516 podman[97512]: 2025-11-25 07:51:16.047567368 +0000 UTC m=+0.048386317 container create ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.065931363 +0000 UTC m=+0.055116869 container create 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:51:16 np0005534516 systemd[1]: Started libpod-conmon-ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326.scope.
Nov 25 02:51:16 np0005534516 systemd[1]: Started libpod-conmon-1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872.scope.
Nov 25 02:51:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765d46c97c1aa687e388b2dde200058860f669ed3e5bb045a52d4bc386a4ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765d46c97c1aa687e388b2dde200058860f669ed3e5bb045a52d4bc386a4ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765d46c97c1aa687e388b2dde200058860f669ed3e5bb045a52d4bc386a4ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 podman[97512]: 2025-11-25 07:51:16.025963232 +0000 UTC m=+0.026782201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:16 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 02:51:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.036402066 +0000 UTC m=+0.025587492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:16 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 02:51:16 np0005534516 podman[97512]: 2025-11-25 07:51:16.142684339 +0000 UTC m=+0.143503308 container init ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:51:16 np0005534516 podman[97512]: 2025-11-25 07:51:16.14851829 +0000 UTC m=+0.149337239 container start ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.150743915 +0000 UTC m=+0.139929341 container init 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:51:16 np0005534516 podman[97512]: 2025-11-25 07:51:16.156365161 +0000 UTC m=+0.157184110 container attach ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.156701739 +0000 UTC m=+0.145887145 container start 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.159366864 +0000 UTC m=+0.148552290 container attach 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:51:16 np0005534516 clever_rubin[97547]: 167 167
Nov 25 02:51:16 np0005534516 systemd[1]: libpod-1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872.scope: Deactivated successfully.
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.160776648 +0000 UTC m=+0.149962054 container died 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a7698f552dcab98d2ccbac98d9151c889625a8cf59c561d16f930d4a4be82b3c-merged.mount: Deactivated successfully.
Nov 25 02:51:16 np0005534516 podman[97513]: 2025-11-25 07:51:16.204360107 +0000 UTC m=+0.193545553 container remove 1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_rubin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 25 02:51:16 np0005534516 systemd[1]: libpod-conmon-1bb0871402ea060bd55be6b352be0e6eb07e754ebb010e7bf4be8f5811ed5872.scope: Deactivated successfully.
Nov 25 02:51:16 np0005534516 podman[97572]: 2025-11-25 07:51:16.357170559 +0000 UTC m=+0.053025779 container create f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3500156775' entity='client.admin' 
Nov 25 02:51:16 np0005534516 systemd[1]: Started libpod-conmon-f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91.scope.
Nov 25 02:51:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:16 np0005534516 podman[97572]: 2025-11-25 07:51:16.332218283 +0000 UTC m=+0.028073523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:16 np0005534516 podman[97572]: 2025-11-25 07:51:16.431534096 +0000 UTC m=+0.127389336 container init f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:16 np0005534516 podman[97572]: 2025-11-25 07:51:16.438732621 +0000 UTC m=+0.134587831 container start f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:16 np0005534516 podman[97572]: 2025-11-25 07:51:16.446288994 +0000 UTC m=+0.142144204 container attach f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:16 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 02:51:16 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 02:51:16 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:51:16 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:16 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 02:51:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:16 np0005534516 brave_stonebraker[97544]: Scheduled rgw.rgw update...
Nov 25 02:51:16 np0005534516 systemd[1]: libpod-ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326.scope: Deactivated successfully.
Nov 25 02:51:16 np0005534516 podman[97615]: 2025-11-25 07:51:16.817696508 +0000 UTC m=+0.033918035 container died ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-53765d46c97c1aa687e388b2dde200058860f669ed3e5bb045a52d4bc386a4ad-merged.mount: Deactivated successfully.
Nov 25 02:51:16 np0005534516 podman[97615]: 2025-11-25 07:51:16.858217812 +0000 UTC m=+0.074439309 container remove ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:16 np0005534516 systemd[1]: libpod-conmon-ff0c5bf220fd8df0740312c493a81d206d342633dc9949b8c171d7c68f298326.scope: Deactivated successfully.
Nov 25 02:51:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v102: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:17 np0005534516 romantic_lamport[97589]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:51:17 np0005534516 romantic_lamport[97589]: --> relative data size: 1.0
Nov 25 02:51:17 np0005534516 romantic_lamport[97589]: --> All data devices are unavailable
Nov 25 02:51:17 np0005534516 systemd[1]: libpod-f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91.scope: Deactivated successfully.
Nov 25 02:51:17 np0005534516 systemd[1]: libpod-f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91.scope: Consumed 1.028s CPU time.
Nov 25 02:51:17 np0005534516 podman[97667]: 2025-11-25 07:51:17.549879417 +0000 UTC m=+0.024496857 container died f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 02:51:17 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 25 02:51:17 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 25 02:51:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b736d6e633a105c2f9f28371de98d4610425494d1bfd1ad0589b7ec8564a83f7-merged.mount: Deactivated successfully.
Nov 25 02:51:17 np0005534516 podman[97667]: 2025-11-25 07:51:17.681766121 +0000 UTC m=+0.156383511 container remove f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:17 np0005534516 systemd[1]: libpod-conmon-f32ca43d248eafb1b9e17ef3cabcbffad9fd28619f11bea04987c9046dbf4a91.scope: Deactivated successfully.
Nov 25 02:51:17 np0005534516 ceph-mon[75015]: Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:17 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:17 np0005534516 python3[97742]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:51:18 np0005534516 python3[97913]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057077.5399528-36625-59055675425713/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.352171689 +0000 UTC m=+0.087575539 container create 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.298911335 +0000 UTC m=+0.034315195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:18 np0005534516 systemd[1]: Started libpod-conmon-7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d.scope.
Nov 25 02:51:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.567407138 +0000 UTC m=+0.302810988 container init 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.577746619 +0000 UTC m=+0.313150479 container start 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 02:51:18 np0005534516 kind_tu[97995]: 167 167
Nov 25 02:51:18 np0005534516 systemd[1]: libpod-7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d.scope: Deactivated successfully.
Nov 25 02:51:18 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 25 02:51:18 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 25 02:51:18 np0005534516 python3[98023]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.703890634 +0000 UTC m=+0.439294494 container attach 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 25 02:51:18 np0005534516 podman[97978]: 2025-11-25 07:51:18.704524459 +0000 UTC m=+0.439928349 container died 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 02:51:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-75998b074c8b9e35060f0bec2569ca718b6872c63dee174a96cbfa4fa526c1ee-merged.mount: Deactivated successfully.
Nov 25 02:51:18 np0005534516 podman[98036]: 2025-11-25 07:51:18.871137857 +0000 UTC m=+0.169610742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v103: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:19 np0005534516 podman[97978]: 2025-11-25 07:51:19.193876588 +0000 UTC m=+0.929280418 container remove 7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:51:19 np0005534516 podman[98036]: 2025-11-25 07:51:19.338932993 +0000 UTC m=+0.637405838 container create a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:19 np0005534516 systemd[1]: Started libpod-conmon-a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405.scope.
Nov 25 02:51:19 np0005534516 systemd[1]: libpod-conmon-7351a9ed21d64e25eb145bb6f20bbc150817ed5be3d5b9e6f04162fdb0b9829d.scope: Deactivated successfully.
Nov 25 02:51:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:19 np0005534516 podman[98057]: 2025-11-25 07:51:19.430672392 +0000 UTC m=+0.101869647 container create 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf5265745b0beccdd849383f9869d8ccc166ed09e84ff6e4d4615634abaa27e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf5265745b0beccdd849383f9869d8ccc166ed09e84ff6e4d4615634abaa27e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf5265745b0beccdd849383f9869d8ccc166ed09e84ff6e4d4615634abaa27e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 podman[98057]: 2025-11-25 07:51:19.367196819 +0000 UTC m=+0.038394164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:19 np0005534516 podman[98036]: 2025-11-25 07:51:19.472459656 +0000 UTC m=+0.770932551 container init a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 02:51:19 np0005534516 systemd[1]: Started libpod-conmon-4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8.scope.
Nov 25 02:51:19 np0005534516 podman[98036]: 2025-11-25 07:51:19.486516288 +0000 UTC m=+0.784989163 container start a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:19 np0005534516 podman[98036]: 2025-11-25 07:51:19.501157544 +0000 UTC m=+0.799630429 container attach a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e296f516c9a480e5df0aeebd01ddd13b77d1b8aeb665c80d1c2db71a6c2eba53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e296f516c9a480e5df0aeebd01ddd13b77d1b8aeb665c80d1c2db71a6c2eba53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e296f516c9a480e5df0aeebd01ddd13b77d1b8aeb665c80d1c2db71a6c2eba53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e296f516c9a480e5df0aeebd01ddd13b77d1b8aeb665c80d1c2db71a6c2eba53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:19 np0005534516 podman[98057]: 2025-11-25 07:51:19.566269566 +0000 UTC m=+0.237466821 container init 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:19 np0005534516 podman[98057]: 2025-11-25 07:51:19.579829476 +0000 UTC m=+0.251026761 container start 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:19 np0005534516 podman[98057]: 2025-11-25 07:51:19.613371551 +0000 UTC m=+0.284568806 container attach 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:20 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:51:20 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 02:51:20 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0[75011]: 2025-11-25T07:51:20.057+0000 7f383574a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e2 new map
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T07:51:20.057872+0000#012modified#0112025-11-25T07:51:20.057922+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 25 02:51:20 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:20 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 02:51:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:20 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 02:51:20 np0005534516 systemd[1]: libpod-a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405.scope: Deactivated successfully.
Nov 25 02:51:20 np0005534516 podman[98036]: 2025-11-25 07:51:20.280292344 +0000 UTC m=+1.578765189 container died a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]: {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    "0": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "devices": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "/dev/loop3"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            ],
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_name": "ceph_lv0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_size": "21470642176",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "name": "ceph_lv0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "tags": {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:20 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.crush_device_class": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.encrypted": "0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_id": "0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.vdo": "0"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            },
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "vg_name": "ceph_vg0"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        }
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    ],
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    "1": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "devices": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "/dev/loop4"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            ],
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_name": "ceph_lv1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_size": "21470642176",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "name": "ceph_lv1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "tags": {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.crush_device_class": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.encrypted": "0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_id": "1",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.vdo": "0"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            },
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "vg_name": "ceph_vg1"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        }
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    ],
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    "2": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "devices": [
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "/dev/loop5"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            ],
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_name": "ceph_lv2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_size": "21470642176",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "name": "ceph_lv2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "tags": {
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.crush_device_class": "",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.encrypted": "0",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osd_id": "2",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:                "ceph.vdo": "0"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            },
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "type": "block",
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:            "vg_name": "ceph_vg2"
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:        }
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]:    ]
Nov 25 02:51:20 np0005534516 nostalgic_goodall[98080]: }
Nov 25 02:51:20 np0005534516 systemd[1]: libpod-4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8.scope: Deactivated successfully.
Nov 25 02:51:20 np0005534516 podman[98057]: 2025-11-25 07:51:20.366388196 +0000 UTC m=+1.037585451 container died 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e296f516c9a480e5df0aeebd01ddd13b77d1b8aeb665c80d1c2db71a6c2eba53-merged.mount: Deactivated successfully.
Nov 25 02:51:20 np0005534516 podman[98057]: 2025-11-25 07:51:20.569837038 +0000 UTC m=+1.241034313 container remove 4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:20 np0005534516 systemd[1]: libpod-conmon-4970427f2de26f67643ad2ccd3dc1793654bac1d6709545113b9a1fbf2f871f8.scope: Deactivated successfully.
Nov 25 02:51:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bdf5265745b0beccdd849383f9869d8ccc166ed09e84ff6e4d4615634abaa27e-merged.mount: Deactivated successfully.
Nov 25 02:51:20 np0005534516 podman[98036]: 2025-11-25 07:51:20.832384157 +0000 UTC m=+2.130857042 container remove a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405 (image=quay.io/ceph/ceph:v18, name=condescending_ardinghelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 02:51:20 np0005534516 systemd[1]: libpod-conmon-a8a37458160ae669b286c76f61350e4f357d8e7b657f7044b465f683f24f6405.scope: Deactivated successfully.
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v105: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:21 np0005534516 python3[98261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 02:51:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 02:51:21 np0005534516 podman[98299]: 2025-11-25 07:51:21.171089606 +0000 UTC m=+0.023885762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:21 np0005534516 podman[98299]: 2025-11-25 07:51:21.271455105 +0000 UTC m=+0.124251231 container create fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.188712934 +0000 UTC m=+0.023683666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:21 np0005534516 systemd[1]: Started libpod-conmon-fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc.scope.
Nov 25 02:51:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d562c646100bbbf7c480f47bb58237b167bd1121dbf31ba977d6ec1105c8002/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d562c646100bbbf7c480f47bb58237b167bd1121dbf31ba977d6ec1105c8002/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d562c646100bbbf7c480f47bb58237b167bd1121dbf31ba977d6ec1105c8002/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.455737291 +0000 UTC m=+0.290707993 container create b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Nov 25 02:51:21 np0005534516 systemd[1]: Started libpod-conmon-b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991.scope.
Nov 25 02:51:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Nov 25 02:51:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:21 np0005534516 podman[98299]: 2025-11-25 07:51:21.572362485 +0000 UTC m=+0.425158651 container init fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:21 np0005534516 podman[98299]: 2025-11-25 07:51:21.584091929 +0000 UTC m=+0.436888055 container start fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:21 np0005534516 podman[98299]: 2025-11-25 07:51:21.650645717 +0000 UTC m=+0.503441923 container attach fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.723994158 +0000 UTC m=+0.558964890 container init b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.731106652 +0000 UTC m=+0.566077354 container start b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:51:21 np0005534516 exciting_stonebraker[98334]: 167 167
Nov 25 02:51:21 np0005534516 systemd[1]: libpod-b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991.scope: Deactivated successfully.
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.751712882 +0000 UTC m=+0.586683594 container attach b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:51:21 np0005534516 podman[98305]: 2025-11-25 07:51:21.75205816 +0000 UTC m=+0.587028862 container died b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cb0885edd82384efd19b14957f5fce39bac98dad8038df3f150efdcb06038906-merged.mount: Deactivated successfully.
Nov 25 02:51:22 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 02:51:22 np0005534516 ceph-mgr[75313]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:22 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v106: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:23 np0005534516 naughty_robinson[98328]: Scheduled mds.cephfs update...
Nov 25 02:51:23 np0005534516 podman[98305]: 2025-11-25 07:51:23.21278726 +0000 UTC m=+2.047758002 container remove b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:51:23 np0005534516 systemd[1]: libpod-fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc.scope: Deactivated successfully.
Nov 25 02:51:23 np0005534516 podman[98299]: 2025-11-25 07:51:23.219973854 +0000 UTC m=+2.072769980 container died fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4d562c646100bbbf7c480f47bb58237b167bd1121dbf31ba977d6ec1105c8002-merged.mount: Deactivated successfully.
Nov 25 02:51:24 np0005534516 ceph-mon[75015]: Saving service mds.cephfs spec with placement compute-0
Nov 25 02:51:24 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:24 np0005534516 podman[98392]: 2025-11-25 07:51:24.203433487 +0000 UTC m=+0.852442301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:24 np0005534516 podman[98299]: 2025-11-25 07:51:24.444750399 +0000 UTC m=+3.297546565 container remove fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc (image=quay.io/ceph/ceph:v18, name=naughty_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:24 np0005534516 systemd[1]: libpod-conmon-fef66df0778b0f23f095c3373dee6e12c01fe71bc0749b6542d6dcd68d97f8fc.scope: Deactivated successfully.
Nov 25 02:51:24 np0005534516 systemd[1]: libpod-conmon-b141aee2fff0db5cec52de0a777d6113d7dadbfc283c0b75f8b99e04e7252991.scope: Deactivated successfully.
Nov 25 02:51:24 np0005534516 podman[98392]: 2025-11-25 07:51:24.638530698 +0000 UTC m=+1.287539482 container create 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:24 np0005534516 systemd[1]: Started libpod-conmon-043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e.scope.
Nov 25 02:51:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6d9e51bf3af509aae1eefb88fd2d49a66165fb99324ff00ac578ebbe35a3d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6d9e51bf3af509aae1eefb88fd2d49a66165fb99324ff00ac578ebbe35a3d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6d9e51bf3af509aae1eefb88fd2d49a66165fb99324ff00ac578ebbe35a3d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6d9e51bf3af509aae1eefb88fd2d49a66165fb99324ff00ac578ebbe35a3d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:24 np0005534516 podman[98392]: 2025-11-25 07:51:24.881824518 +0000 UTC m=+1.530833312 container init 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:24 np0005534516 podman[98392]: 2025-11-25 07:51:24.893202704 +0000 UTC m=+1.542211488 container start 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 02:51:24 np0005534516 podman[98392]: 2025-11-25 07:51:24.914905602 +0000 UTC m=+1.563914386 container attach 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v107: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:25 np0005534516 python3[98491]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 02:51:25 np0005534516 python3[98564]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057084.8589401-36657-281221573763577/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=456535103098adb3f62f4b843007d7273e91427f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]: {
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_id": 1,
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "type": "bluestore"
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    },
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_id": 2,
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "type": "bluestore"
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    },
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_id": 0,
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:        "type": "bluestore"
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]:    }
Nov 25 02:51:25 np0005534516 sharp_hofstadter[98409]: }
Nov 25 02:51:25 np0005534516 systemd[1]: libpod-043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e.scope: Deactivated successfully.
Nov 25 02:51:25 np0005534516 systemd[1]: libpod-043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e.scope: Consumed 1.015s CPU time.
Nov 25 02:51:25 np0005534516 podman[98392]: 2025-11-25 07:51:25.906874822 +0000 UTC m=+2.555883616 container died 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fe6d9e51bf3af509aae1eefb88fd2d49a66165fb99324ff00ac578ebbe35a3d0-merged.mount: Deactivated successfully.
Nov 25 02:51:26 np0005534516 python3[98642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:26 np0005534516 podman[98392]: 2025-11-25 07:51:26.168856477 +0000 UTC m=+2.817865251 container remove 043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:26 np0005534516 systemd[1]: libpod-conmon-043bf50b5a6bb0fb7a7a7d32c51ded802b54c19464e3fb4977902e329acc143e.scope: Deactivated successfully.
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.197946234 +0000 UTC m=+0.103031455 container create b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:26 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:26 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:26 np0005534516 systemd[1]: Started libpod-conmon-b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444.scope.
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.177033226 +0000 UTC m=+0.082118477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641d70e31d233a2873c0e77342431930fa9ce460c458a22e5be87b2810d0cc38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641d70e31d233a2873c0e77342431930fa9ce460c458a22e5be87b2810d0cc38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.302585347 +0000 UTC m=+0.207670588 container init b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.311053873 +0000 UTC m=+0.216139094 container start b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.314957607 +0000 UTC m=+0.220042828 container attach b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821493535' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 02:51:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821493535' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 02:51:26 np0005534516 podman[98656]: 2025-11-25 07:51:26.937102802 +0000 UTC m=+0.842188023 container died b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:51:26 np0005534516 systemd[1]: libpod-b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444.scope: Deactivated successfully.
Nov 25 02:51:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-641d70e31d233a2873c0e77342431930fa9ce460c458a22e5be87b2810d0cc38-merged.mount: Deactivated successfully.
Nov 25 02:51:27 np0005534516 podman[98656]: 2025-11-25 07:51:27.020076228 +0000 UTC m=+0.925161449 container remove b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444 (image=quay.io/ceph/ceph:v18, name=naughty_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:51:27 np0005534516 systemd[1]: libpod-conmon-b8c14b5d60fbde4958eab4b3e2cab89b04ca7bac2d268522b765e8395e442444.scope: Deactivated successfully.
Nov 25 02:51:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v108: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 25 02:51:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 25 02:51:27 np0005534516 podman[98929]: 2025-11-25 07:51:27.208533277 +0000 UTC m=+0.081564063 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/821493535' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/821493535' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 02:51:27 np0005534516 podman[98929]: 2025-11-25 07:51:27.355795935 +0000 UTC m=+0.228826721 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:51:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 25 02:51:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 25 02:51:27 np0005534516 python3[99029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:27 np0005534516 podman[99064]: 2025-11-25 07:51:27.803115742 +0000 UTC m=+0.069064508 container create e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:51:27 np0005534516 systemd[1]: Started libpod-conmon-e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1.scope.
Nov 25 02:51:27 np0005534516 podman[99064]: 2025-11-25 07:51:27.755573787 +0000 UTC m=+0.021522583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af401f67d4213a213461ebba73e8fc74bba3862581550bc3c25b554675c21e7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af401f67d4213a213461ebba73e8fc74bba3862581550bc3c25b554675c21e7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:27 np0005534516 podman[99064]: 2025-11-25 07:51:27.919369667 +0000 UTC m=+0.185318443 container init e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:27 np0005534516 podman[99064]: 2025-11-25 07:51:27.925643149 +0000 UTC m=+0.191591945 container start e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:27 np0005534516 podman[99064]: 2025-11-25 07:51:27.933277155 +0000 UTC m=+0.199225921 container attach e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 292eddfd-a2f0-4311-85c2-45a52d2afb40 does not exist
Nov 25 02:51:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1063e17d-7729-4ac2-be1f-75f6bc777bde does not exist
Nov 25 02:51:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 10febd0e-8a98-4819-9e07-4f2c7a77835c does not exist
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 02:51:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342989692' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 02:51:28 np0005534516 pensive_albattani[99097]: 
Nov 25 02:51:28 np0005534516 pensive_albattani[99097]: {"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":202,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1764057042,"num_in_osds":3,"osd_in_since":1764056988,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":162}],"num_pgs":162,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84107264,"bytes_avail":64327819264,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-11-25T07:51:15.104751+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 02:51:28 np0005534516 systemd[1]: libpod-e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1.scope: Deactivated successfully.
Nov 25 02:51:28 np0005534516 conmon[99097]: conmon e44d0a3142c02acc8c80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1.scope/container/memory.events
Nov 25 02:51:28 np0005534516 podman[99064]: 2025-11-25 07:51:28.587155891 +0000 UTC m=+0.853104657 container died e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:28 np0005534516 podman[99260]: 2025-11-25 07:51:28.586435904 +0000 UTC m=+0.024236950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:28 np0005534516 podman[99260]: 2025-11-25 07:51:28.913562402 +0000 UTC m=+0.351363448 container create 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:51:29 np0005534516 systemd[1]: Started libpod-conmon-2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc.scope.
Nov 25 02:51:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v109: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-af401f67d4213a213461ebba73e8fc74bba3862581550bc3c25b554675c21e7e-merged.mount: Deactivated successfully.
Nov 25 02:51:29 np0005534516 podman[99064]: 2025-11-25 07:51:29.621293756 +0000 UTC m=+1.887242512 container remove e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1 (image=quay.io/ceph/ceph:v18, name=pensive_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:29 np0005534516 systemd[1]: libpod-conmon-e44d0a3142c02acc8c8044ca5ee442f6abcd7bd6af08e551e0e4d0d74ba8d0a1.scope: Deactivated successfully.
Nov 25 02:51:29 np0005534516 podman[99260]: 2025-11-25 07:51:29.701138276 +0000 UTC m=+1.138939342 container init 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:29 np0005534516 podman[99260]: 2025-11-25 07:51:29.712262777 +0000 UTC m=+1.150063813 container start 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 02:51:29 np0005534516 infallible_babbage[99291]: 167 167
Nov 25 02:51:29 np0005534516 systemd[1]: libpod-2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc.scope: Deactivated successfully.
Nov 25 02:51:29 np0005534516 podman[99260]: 2025-11-25 07:51:29.787226387 +0000 UTC m=+1.225027443 container attach 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:51:29 np0005534516 podman[99260]: 2025-11-25 07:51:29.787785341 +0000 UTC m=+1.225586407 container died 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 02:51:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-de9d5c1c22c88de8e46d7df72a24950f9f50f4a0380f8a554cbda9bfcc143dcb-merged.mount: Deactivated successfully.
Nov 25 02:51:30 np0005534516 python3[99332]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:30 np0005534516 podman[99260]: 2025-11-25 07:51:30.278207766 +0000 UTC m=+1.716008832 container remove 2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:51:30 np0005534516 systemd[1]: libpod-conmon-2187e67a1c600f3790563ba7b9fa36ba60303015cfcce84102987c598d5eb0fc.scope: Deactivated successfully.
Nov 25 02:51:30 np0005534516 podman[99334]: 2025-11-25 07:51:30.419659453 +0000 UTC m=+0.397653443 container create 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:30 np0005534516 podman[99334]: 2025-11-25 07:51:30.384354215 +0000 UTC m=+0.362348235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:30 np0005534516 systemd[1]: Started libpod-conmon-74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5.scope.
Nov 25 02:51:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc94ccdd4cd61f7c8107a6354a845ba143bc76b71b6b9d499dfcc4fc409eabda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc94ccdd4cd61f7c8107a6354a845ba143bc76b71b6b9d499dfcc4fc409eabda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:30 np0005534516 podman[99354]: 2025-11-25 07:51:30.564336678 +0000 UTC m=+0.155775786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:30 np0005534516 podman[99354]: 2025-11-25 07:51:30.706532103 +0000 UTC m=+0.297971161 container create c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:30 np0005534516 podman[99334]: 2025-11-25 07:51:30.825758709 +0000 UTC m=+0.803752739 container init 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:30 np0005534516 podman[99334]: 2025-11-25 07:51:30.833044957 +0000 UTC m=+0.811038957 container start 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:30 np0005534516 podman[99334]: 2025-11-25 07:51:30.899138682 +0000 UTC m=+0.877132682 container attach 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:31 np0005534516 systemd[1]: Started libpod-conmon-c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a.scope.
Nov 25 02:51:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v110: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:31 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 02:51:31 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 02:51:31 np0005534516 podman[99354]: 2025-11-25 07:51:31.344142904 +0000 UTC m=+0.935581982 container init c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:31 np0005534516 podman[99354]: 2025-11-25 07:51:31.351774799 +0000 UTC m=+0.943213847 container start c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:51:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 02:51:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729005680' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 02:51:31 np0005534516 busy_colden[99367]: 
Nov 25 02:51:31 np0005534516 busy_colden[99367]: {"epoch":1,"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","modified":"2025-11-25T07:47:59.578426Z","created":"2025-11-25T07:47:59.578426Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 25 02:51:31 np0005534516 busy_colden[99367]: dumped monmap epoch 1
Nov 25 02:51:31 np0005534516 systemd[1]: libpod-74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5.scope: Deactivated successfully.
Nov 25 02:51:31 np0005534516 podman[99354]: 2025-11-25 07:51:31.49300862 +0000 UTC m=+1.084447728 container attach c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:31 np0005534516 podman[99334]: 2025-11-25 07:51:31.516605014 +0000 UTC m=+1.494599014 container died 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:31 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 02:51:31 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 02:51:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fc94ccdd4cd61f7c8107a6354a845ba143bc76b71b6b9d499dfcc4fc409eabda-merged.mount: Deactivated successfully.
Nov 25 02:51:32 np0005534516 hopeful_chaum[99378]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:51:32 np0005534516 hopeful_chaum[99378]: --> relative data size: 1.0
Nov 25 02:51:32 np0005534516 hopeful_chaum[99378]: --> All data devices are unavailable
Nov 25 02:51:32 np0005534516 systemd[1]: libpod-c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a.scope: Deactivated successfully.
Nov 25 02:51:32 np0005534516 systemd[1]: libpod-c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a.scope: Consumed 1.002s CPU time.
Nov 25 02:51:32 np0005534516 podman[99404]: 2025-11-25 07:51:32.425875034 +0000 UTC m=+0.939765432 container remove 74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5 (image=quay.io/ceph/ceph:v18, name=busy_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:51:32 np0005534516 podman[99354]: 2025-11-25 07:51:32.428111079 +0000 UTC m=+2.019550137 container died c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:32 np0005534516 systemd[1]: libpod-conmon-74839672295b97cc97ab2ed4b676a70ba60ccd4c9557f875305739401f35dda5.scope: Deactivated successfully.
Nov 25 02:51:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-aa03878d9d9faad214a8a7b36edfbcf41cd98eb869e659c45af6515c088289ff-merged.mount: Deactivated successfully.
Nov 25 02:51:32 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 02:51:32 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 02:51:32 np0005534516 podman[99354]: 2025-11-25 07:51:32.850604474 +0000 UTC m=+2.442043512 container remove c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:32 np0005534516 systemd[1]: libpod-conmon-c55e3b384caf75753a1cba804bbb9d6fbcb86b24843a1a989cb50f7e3892060a.scope: Deactivated successfully.
Nov 25 02:51:33 np0005534516 python3[99480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v111: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:33 np0005534516 podman[99543]: 2025-11-25 07:51:33.141422519 +0000 UTC m=+0.090306925 container create a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:33 np0005534516 podman[99543]: 2025-11-25 07:51:33.074611146 +0000 UTC m=+0.023495572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:33 np0005534516 systemd[1]: Started libpod-conmon-a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009.scope.
Nov 25 02:51:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc03854166c61b0938d5b1fe6335bc470a1e73d1e9f33f0219d228310de6830/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc03854166c61b0938d5b1fe6335bc470a1e73d1e9f33f0219d228310de6830/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:33 np0005534516 podman[99543]: 2025-11-25 07:51:33.435018612 +0000 UTC m=+0.383903018 container init a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:33 np0005534516 podman[99543]: 2025-11-25 07:51:33.446960822 +0000 UTC m=+0.395845208 container start a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 02:51:33 np0005534516 podman[99543]: 2025-11-25 07:51:33.514951874 +0000 UTC m=+0.463836260 container attach a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.65314646 +0000 UTC m=+0.064714533 container create c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.608878676 +0000 UTC m=+0.020446759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:33 np0005534516 systemd[1]: Started libpod-conmon-c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282.scope.
Nov 25 02:51:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.813501337 +0000 UTC m=+0.225069450 container init c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.820260851 +0000 UTC m=+0.231828924 container start c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:33 np0005534516 boring_montalcini[99654]: 167 167
Nov 25 02:51:33 np0005534516 systemd[1]: libpod-c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282.scope: Deactivated successfully.
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.889534774 +0000 UTC m=+0.301102837 container attach c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:33 np0005534516 podman[99638]: 2025-11-25 07:51:33.890415665 +0000 UTC m=+0.301983728 container died c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 25 02:51:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1697876199' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 02:51:34 np0005534516 sad_ptolemy[99602]: [client.openstack]
Nov 25 02:51:34 np0005534516 sad_ptolemy[99602]: #011key = AQAEXyVpAAAAABAAFeGhYkRpsvoEu/4JAF8Nmg==
Nov 25 02:51:34 np0005534516 sad_ptolemy[99602]: #011caps mgr = "allow *"
Nov 25 02:51:34 np0005534516 sad_ptolemy[99602]: #011caps mon = "profile rbd"
Nov 25 02:51:34 np0005534516 sad_ptolemy[99602]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 25 02:51:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-097c4e088f6fe2939b8492bac695cd824f42f609dd79ebea96efe153334b64a7-merged.mount: Deactivated successfully.
Nov 25 02:51:34 np0005534516 systemd[1]: libpod-a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009.scope: Deactivated successfully.
Nov 25 02:51:34 np0005534516 podman[99638]: 2025-11-25 07:51:34.160078937 +0000 UTC m=+0.571647010 container remove c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:51:34 np0005534516 podman[99543]: 2025-11-25 07:51:34.163426609 +0000 UTC m=+1.112310995 container died a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7cc03854166c61b0938d5b1fe6335bc470a1e73d1e9f33f0219d228310de6830-merged.mount: Deactivated successfully.
Nov 25 02:51:34 np0005534516 podman[99543]: 2025-11-25 07:51:34.29644672 +0000 UTC m=+1.245331106 container remove a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009 (image=quay.io/ceph/ceph:v18, name=sad_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:51:34 np0005534516 podman[99714]: 2025-11-25 07:51:34.376490085 +0000 UTC m=+0.096276130 container create 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:34 np0005534516 podman[99714]: 2025-11-25 07:51:34.303622285 +0000 UTC m=+0.023408360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:34 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1697876199' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 02:51:34 np0005534516 systemd[1]: Started libpod-conmon-9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87.scope.
Nov 25 02:51:34 np0005534516 systemd[1]: libpod-conmon-c4c4f9e761f7c7c65bf8a475c9dd769975953be7fb11075549af2833ac262282.scope: Deactivated successfully.
Nov 25 02:51:34 np0005534516 systemd[1]: libpod-conmon-a4eccd89428eb0a17ed8e239c9457ab6b3feb6a23d40a07d3736c436fe12b009.scope: Deactivated successfully.
Nov 25 02:51:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0338e0cc486a53e1407511b70dfa53c3fe9a8d01550e9d9cb72a507e149120b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0338e0cc486a53e1407511b70dfa53c3fe9a8d01550e9d9cb72a507e149120b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0338e0cc486a53e1407511b70dfa53c3fe9a8d01550e9d9cb72a507e149120b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0338e0cc486a53e1407511b70dfa53c3fe9a8d01550e9d9cb72a507e149120b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:34 np0005534516 podman[99714]: 2025-11-25 07:51:34.551172159 +0000 UTC m=+0.270958234 container init 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 02:51:34 np0005534516 podman[99714]: 2025-11-25 07:51:34.55902831 +0000 UTC m=+0.278814355 container start 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:34 np0005534516 podman[99714]: 2025-11-25 07:51:34.58906624 +0000 UTC m=+0.308852325 container attach 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:34 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 02:51:34 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 02:51:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v112: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]: {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    "0": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "devices": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "/dev/loop3"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            ],
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_name": "ceph_lv0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_size": "21470642176",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "name": "ceph_lv0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "tags": {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.crush_device_class": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.encrypted": "0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_id": "0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.vdo": "0"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            },
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "vg_name": "ceph_vg0"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        }
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    ],
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    "1": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "devices": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "/dev/loop4"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            ],
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_name": "ceph_lv1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_size": "21470642176",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "name": "ceph_lv1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "tags": {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.crush_device_class": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.encrypted": "0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_id": "1",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.vdo": "0"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            },
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "vg_name": "ceph_vg1"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        }
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    ],
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    "2": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "devices": [
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "/dev/loop5"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            ],
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_name": "ceph_lv2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_size": "21470642176",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "name": "ceph_lv2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "tags": {
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.cluster_name": "ceph",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.crush_device_class": "",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.encrypted": "0",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osd_id": "2",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:                "ceph.vdo": "0"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            },
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "type": "block",
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:            "vg_name": "ceph_vg2"
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:        }
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]:    ]
Nov 25 02:51:35 np0005534516 dazzling_burnell[99731]: }
Nov 25 02:51:35 np0005534516 systemd[1]: libpod-9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87.scope: Deactivated successfully.
Nov 25 02:51:35 np0005534516 podman[99714]: 2025-11-25 07:51:35.344409121 +0000 UTC m=+1.064195206 container died 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0338e0cc486a53e1407511b70dfa53c3fe9a8d01550e9d9cb72a507e149120b1-merged.mount: Deactivated successfully.
Nov 25 02:51:35 np0005534516 podman[99714]: 2025-11-25 07:51:35.436237042 +0000 UTC m=+1.156023097 container remove 9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 02:51:35 np0005534516 systemd[1]: libpod-conmon-9a92b3194198f837f492f0c74f6a56b811aa475106029bf4d2d85aa5b3f7ac87.scope: Deactivated successfully.
Nov 25 02:51:35 np0005534516 ansible-async_wrapper.py[99995]: Invoked with j949580342221 30 /home/zuul/.ansible/tmp/ansible-tmp-1764057095.3752365-36729-163349144695103/AnsiballZ_command.py _
Nov 25 02:51:35 np0005534516 ansible-async_wrapper.py[100007]: Starting module and watcher
Nov 25 02:51:35 np0005534516 ansible-async_wrapper.py[100007]: Start watching 100009 (30)
Nov 25 02:51:35 np0005534516 ansible-async_wrapper.py[100009]: Start module (100009)
Nov 25 02:51:35 np0005534516 ansible-async_wrapper.py[99995]: Return async_wrapper task started.
Nov 25 02:51:35 np0005534516 python3[100010]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.082108454 +0000 UTC m=+0.059191030 container create 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.105536723 +0000 UTC m=+0.060381608 container create 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 02:51:36 np0005534516 systemd[1]: Started libpod-conmon-74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70.scope.
Nov 25 02:51:36 np0005534516 systemd[1]: Started libpod-conmon-562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050.scope.
Nov 25 02:51:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b857fa3ab259a6e96a5ba9268628c510ed245f4d9d92d725ad3d5ec940ebb710/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b857fa3ab259a6e96a5ba9268628c510ed245f4d9d92d725ad3d5ec940ebb710/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.047906802 +0000 UTC m=+0.024989388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.159489734 +0000 UTC m=+0.136572340 container init 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.072662494 +0000 UTC m=+0.027507309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.167590771 +0000 UTC m=+0.144673347 container start 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.168918653 +0000 UTC m=+0.123763468 container init 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.175583965 +0000 UTC m=+0.130428770 container start 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.178887475 +0000 UTC m=+0.155970031 container attach 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:36 np0005534516 systemd[1]: libpod-562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050.scope: Deactivated successfully.
Nov 25 02:51:36 np0005534516 amazing_euclid[100083]: 167 167
Nov 25 02:51:36 np0005534516 conmon[100083]: conmon 562bc8c5b62520eee7d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050.scope/container/memory.events
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.190282692 +0000 UTC m=+0.145127487 container attach 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.19063582 +0000 UTC m=+0.145480615 container died 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:51:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-60f24bb84045db665bdbbd7d2eccb7d08682c20ccd0921161398ac1b59edc4b3-merged.mount: Deactivated successfully.
Nov 25 02:51:36 np0005534516 podman[100054]: 2025-11-25 07:51:36.252458282 +0000 UTC m=+0.207303067 container remove 562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 02:51:36 np0005534516 systemd[1]: libpod-conmon-562bc8c5b62520eee7d53ceae5a7898bc275b9f85b804291211994dd1fb95050.scope: Deactivated successfully.
Nov 25 02:51:36 np0005534516 podman[100109]: 2025-11-25 07:51:36.398818959 +0000 UTC m=+0.044678457 container create 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:51:36 np0005534516 systemd[1]: Started libpod-conmon-49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511.scope.
Nov 25 02:51:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:36 np0005534516 podman[100109]: 2025-11-25 07:51:36.375380469 +0000 UTC m=+0.021239987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61075f822da8d0bffbbf474ec11988943e37be3293633edcf730ba6a7f7777db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61075f822da8d0bffbbf474ec11988943e37be3293633edcf730ba6a7f7777db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61075f822da8d0bffbbf474ec11988943e37be3293633edcf730ba6a7f7777db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61075f822da8d0bffbbf474ec11988943e37be3293633edcf730ba6a7f7777db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:36 np0005534516 podman[100109]: 2025-11-25 07:51:36.514803076 +0000 UTC m=+0.160662614 container init 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:36 np0005534516 podman[100109]: 2025-11-25 07:51:36.520889894 +0000 UTC m=+0.166749392 container start 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:51:36 np0005534516 podman[100109]: 2025-11-25 07:51:36.542443638 +0000 UTC m=+0.188303146 container attach 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:36 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:51:36 np0005534516 sharp_kepler[100081]: 
Nov 25 02:51:36 np0005534516 sharp_kepler[100081]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 02:51:36 np0005534516 systemd[1]: libpod-74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70.scope: Deactivated successfully.
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.783214768 +0000 UTC m=+0.760297324 container died 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b857fa3ab259a6e96a5ba9268628c510ed245f4d9d92d725ad3d5ec940ebb710-merged.mount: Deactivated successfully.
Nov 25 02:51:36 np0005534516 podman[100043]: 2025-11-25 07:51:36.858572959 +0000 UTC m=+0.835655525 container remove 74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70 (image=quay.io/ceph/ceph:v18, name=sharp_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:51:36 np0005534516 systemd[1]: libpod-conmon-74bfa77bf7d342b46c7a9846e5a9a84c6b3575191d14696e3dc2268ebd3ceb70.scope: Deactivated successfully.
Nov 25 02:51:36 np0005534516 ansible-async_wrapper.py[100009]: Module complete (100009)
Nov 25 02:51:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v113: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:37 np0005534516 python3[100211]: ansible-ansible.legacy.async_status Invoked with jid=j949580342221.99995 mode=status _async_dir=/root/.ansible_async
Nov 25 02:51:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 25 02:51:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]: {
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_id": 1,
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "type": "bluestore"
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    },
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_id": 2,
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "type": "bluestore"
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    },
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_id": 0,
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:        "type": "bluestore"
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]:    }
Nov 25 02:51:37 np0005534516 boring_grothendieck[100125]: }
Nov 25 02:51:37 np0005534516 python3[100277]: ansible-ansible.legacy.async_status Invoked with jid=j949580342221.99995 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 02:51:37 np0005534516 systemd[1]: libpod-49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511.scope: Deactivated successfully.
Nov 25 02:51:37 np0005534516 systemd[1]: libpod-49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511.scope: Consumed 1.054s CPU time.
Nov 25 02:51:37 np0005534516 podman[100289]: 2025-11-25 07:51:37.612413363 +0000 UTC m=+0.022351524 container died 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 02:51:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-61075f822da8d0bffbbf474ec11988943e37be3293633edcf730ba6a7f7777db-merged.mount: Deactivated successfully.
Nov 25 02:51:38 np0005534516 podman[100289]: 2025-11-25 07:51:38.096200848 +0000 UTC m=+0.506138999 container remove 49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:38 np0005534516 systemd[1]: libpod-conmon-49c6d34248dac681ca3ea2be845e785d137fff660d5819bd18d6a0ff476ba511.scope: Deactivated successfully.
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:38 np0005534516 python3[100330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:38 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev e7bb3b18-60b0-45f5-96cf-566d33ac6bef (Updating rgw.rgw deployment (+1 -> 1))
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.suapaq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.suapaq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 02:51:38 np0005534516 podman[100331]: 2025-11-25 07:51:38.351807107 +0000 UTC m=+0.111672404 container create 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:38 np0005534516 podman[100331]: 2025-11-25 07:51:38.260488078 +0000 UTC m=+0.020353385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.suapaq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 25 02:51:38 np0005534516 systemd[1]: Started libpod-conmon-5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065.scope.
Nov 25 02:51:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246cbfdc8e76401159cf11946e9a8c3844773749f88ee78daab058e3b5162b1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246cbfdc8e76401159cf11946e9a8c3844773749f88ee78daab058e3b5162b1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.suapaq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 02:51:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.suapaq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 02:51:38 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.suapaq on compute-0
Nov 25 02:51:38 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.suapaq on compute-0
Nov 25 02:51:38 np0005534516 podman[100331]: 2025-11-25 07:51:38.891483178 +0000 UTC m=+0.651348575 container init 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:38 np0005534516 podman[100331]: 2025-11-25 07:51:38.904687489 +0000 UTC m=+0.664552806 container start 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 02:51:39 np0005534516 podman[100331]: 2025-11-25 07:51:39.05864836 +0000 UTC m=+0.818513647 container attach 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:51:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v114: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 02:51:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 02:51:39 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:51:39 np0005534516 upbeat_gauss[100346]: 
Nov 25 02:51:39 np0005534516 upbeat_gauss[100346]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 02:51:39 np0005534516 systemd[1]: libpod-5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065.scope: Deactivated successfully.
Nov 25 02:51:39 np0005534516 podman[100506]: 2025-11-25 07:51:39.50561908 +0000 UTC m=+0.077117825 container create 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 02:51:39 np0005534516 podman[100331]: 2025-11-25 07:51:39.511503733 +0000 UTC m=+1.271369020 container died 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:39 np0005534516 podman[100506]: 2025-11-25 07:51:39.463498086 +0000 UTC m=+0.034996851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:39 np0005534516 systemd[1]: Started libpod-conmon-0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785.scope.
Nov 25 02:51:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-246cbfdc8e76401159cf11946e9a8c3844773749f88ee78daab058e3b5162b1b-merged.mount: Deactivated successfully.
Nov 25 02:51:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:39 np0005534516 ceph-mon[75015]: Deploying daemon rgw.rgw.compute-0.suapaq on compute-0
Nov 25 02:51:40 np0005534516 podman[100331]: 2025-11-25 07:51:40.062405016 +0000 UTC m=+1.822270303 container remove 5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065 (image=quay.io/ceph/ceph:v18, name=upbeat_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:40 np0005534516 systemd[1]: libpod-conmon-5920ae35172e714e5b4c78a605aaf2ae0152caa6257ec3a76d551fff4f61c065.scope: Deactivated successfully.
Nov 25 02:51:40 np0005534516 podman[100506]: 2025-11-25 07:51:40.243241629 +0000 UTC m=+0.814740414 container init 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:40 np0005534516 podman[100506]: 2025-11-25 07:51:40.252120055 +0000 UTC m=+0.823618830 container start 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:40 np0005534516 optimistic_joliot[100537]: 167 167
Nov 25 02:51:40 np0005534516 systemd[1]: libpod-0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785.scope: Deactivated successfully.
Nov 25 02:51:40 np0005534516 conmon[100537]: conmon 0aadc6eec2f3a32ad6af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785.scope/container/memory.events
Nov 25 02:51:40 np0005534516 podman[100506]: 2025-11-25 07:51:40.30702488 +0000 UTC m=+0.878523635 container attach 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:51:40 np0005534516 podman[100506]: 2025-11-25 07:51:40.30743911 +0000 UTC m=+0.878937865 container died 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-55767977f2292d69c942397ef1a71ce7979658d070b982c63ab1f2d5eb17d20b-merged.mount: Deactivated successfully.
Nov 25 02:51:40 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 02:51:40 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 02:51:40 np0005534516 podman[100506]: 2025-11-25 07:51:40.724072582 +0000 UTC m=+1.295571357 container remove 0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_joliot, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:40 np0005534516 systemd[1]: libpod-conmon-0aadc6eec2f3a32ad6af94ba8f50c30f79f172fe3daac30e700a58ec74d79785.scope: Deactivated successfully.
Nov 25 02:51:40 np0005534516 ansible-async_wrapper.py[100007]: Done in kid B.
Nov 25 02:51:41 np0005534516 python3[100581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v115: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:41 np0005534516 podman[100582]: 2025-11-25 07:51:41.085059752 +0000 UTC m=+0.029843645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:41 np0005534516 podman[100582]: 2025-11-25 07:51:41.258752362 +0000 UTC m=+0.203536195 container create 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:41 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 25 02:51:41 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 25 02:51:41 np0005534516 systemd[1]: Reloading.
Nov 25 02:51:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:41 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:51:41 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:51:41 np0005534516 systemd[1]: Started libpod-conmon-5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937.scope.
Nov 25 02:51:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5dfe2e6771bc822bd4c1e49c500c3dcfb03744383540077119b7a1d3ecf954d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5dfe2e6771bc822bd4c1e49c500c3dcfb03744383540077119b7a1d3ecf954d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:41 np0005534516 systemd[1]: Reloading.
Nov 25 02:51:41 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:51:41 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:51:41 np0005534516 podman[100582]: 2025-11-25 07:51:41.877216719 +0000 UTC m=+0.822000622 container init 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:41 np0005534516 podman[100582]: 2025-11-25 07:51:41.890662115 +0000 UTC m=+0.835445948 container start 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:51:42 np0005534516 podman[100582]: 2025-11-25 07:51:42.002477592 +0000 UTC m=+0.947261515 container attach 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:42 np0005534516 systemd[1]: Starting Ceph rgw.rgw.compute-0.suapaq for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:51:42 np0005534516 podman[100738]: 2025-11-25 07:51:42.290029349 +0000 UTC m=+0.026933946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:42 np0005534516 podman[100738]: 2025-11-25 07:51:42.448690833 +0000 UTC m=+0.185595420 container create 4a9c136fb7d1637e5080eb387097c3f296d62457178347dd9c5f273d2481984f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-rgw-rgw-compute-0-suapaq, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:42 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:51:42 np0005534516 optimistic_napier[100636]: 
Nov 25 02:51:42 np0005534516 optimistic_napier[100636]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 25 02:51:42 np0005534516 systemd[1]: libpod-5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937.scope: Deactivated successfully.
Nov 25 02:51:42 np0005534516 podman[100582]: 2025-11-25 07:51:42.683801605 +0000 UTC m=+1.628585468 container died 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:51:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa530becd0798d50375246d282a9f09eefbcb07efd5120e7431c8a08be4338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa530becd0798d50375246d282a9f09eefbcb07efd5120e7431c8a08be4338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa530becd0798d50375246d282a9f09eefbcb07efd5120e7431c8a08be4338/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa530becd0798d50375246d282a9f09eefbcb07efd5120e7431c8a08be4338/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.suapaq supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:42 np0005534516 podman[100738]: 2025-11-25 07:51:42.981550999 +0000 UTC m=+0.718455646 container init 4a9c136fb7d1637e5080eb387097c3f296d62457178347dd9c5f273d2481984f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-rgw-rgw-compute-0-suapaq, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 02:51:42 np0005534516 podman[100738]: 2025-11-25 07:51:42.988414646 +0000 UTC m=+0.725319233 container start 4a9c136fb7d1637e5080eb387097c3f296d62457178347dd9c5f273d2481984f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-rgw-rgw-compute-0-suapaq, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:43 np0005534516 radosgw[100782]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:51:43 np0005534516 radosgw[100782]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 25 02:51:43 np0005534516 radosgw[100782]: framework: beast
Nov 25 02:51:43 np0005534516 radosgw[100782]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 25 02:51:43 np0005534516 radosgw[100782]: init_numa not setting numa affinity
Nov 25 02:51:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v116: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:43 np0005534516 bash[100738]: 4a9c136fb7d1637e5080eb387097c3f296d62457178347dd9c5f273d2481984f
Nov 25 02:51:43 np0005534516 systemd[1]: Started Ceph rgw.rgw.compute-0.suapaq for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:43 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 02:51:43 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 25 02:51:43 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 25 02:51:43 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a5dfe2e6771bc822bd4c1e49c500c3dcfb03744383540077119b7a1d3ecf954d-merged.mount: Deactivated successfully.
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 02:51:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 02:51:44 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 02:51:44 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 02:51:44 np0005534516 podman[100582]: 2025-11-25 07:51:44.57942667 +0000 UTC m=+3.524210543 container remove 5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937 (image=quay.io/ceph/ceph:v18, name=optimistic_napier, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:51:44 np0005534516 systemd[1]: libpod-conmon-5922e0ea4226327c7b23408d48553e902d0f242f02a49d9053c7488e17896937.scope: Deactivated successfully.
Nov 25 02:51:44 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 41 pg[8.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:44 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev e7bb3b18-60b0-45f5-96cf-566d33ac6bef (Updating rgw.rgw deployment (+1 -> 1))
Nov 25 02:51:44 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event e7bb3b18-60b0-45f5-96cf-566d33ac6bef (Updating rgw.rgw deployment (+1 -> 1)) in 6 seconds
Nov 25 02:51:44 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:44 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 02:51:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 25 02:51:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v118: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 02:51:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:45 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev ddf1a32f-96ef-44c8-bef3-36d2ad458051 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 02:51:45 np0005534516 python3[100870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:46 np0005534516 podman[100871]: 2025-11-25 07:51:45.972642789 +0000 UTC m=+0.027904419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dgfvvi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dgfvvi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 02:51:46 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 42 pg[8.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:46 np0005534516 podman[100871]: 2025-11-25 07:51:46.338518008 +0000 UTC m=+0.393779608 container create c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dgfvvi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:46 np0005534516 ceph-mgr[75313]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.dgfvvi on compute-0
Nov 25 02:51:46 np0005534516 ceph-mgr[75313]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.dgfvvi on compute-0
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: Saving service rgw.rgw spec with placement compute-0
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dgfvvi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 02:51:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dgfvvi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 02:51:46 np0005534516 systemd[1]: Started libpod-conmon-c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6.scope.
Nov 25 02:51:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01946d8ad7285b933773ba7a1301e767d88a455cabd23f58b5c11923d10f7f41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01946d8ad7285b933773ba7a1301e767d88a455cabd23f58b5c11923d10f7f41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:46 np0005534516 podman[100871]: 2025-11-25 07:51:46.732277034 +0000 UTC m=+0.787538724 container init c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:51:46 np0005534516 podman[100871]: 2025-11-25 07:51:46.740026363 +0000 UTC m=+0.795287993 container start c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:46 np0005534516 podman[100871]: 2025-11-25 07:51:46.855417376 +0000 UTC m=+0.910679026 container attach c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v120: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:47 np0005534516 podman[101051]: 2025-11-25 07:51:47.134395164 +0000 UTC m=+0.019475604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:47 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 02:51:47 np0005534516 quirky_darwin[100911]: 
Nov 25 02:51:47 np0005534516 quirky_darwin[100911]: [{"container_id": "32ccbc05690c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.35%", "created": "2025-11-25T07:49:29.427280Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-25T07:49:29.492199Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.892734Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-11-25T07:49:29.325250Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@crash.compute-0", "version": "18.2.7"}, {"container_id": "3683d784a4f2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "24.54%", "created": "2025-11-25T07:48:08.210887Z", "daemon_id": "compute-0.cpskve", "daemon_name": "mgr.compute-0.cpskve", "daemon_type": "mgr", "events": ["2025-11-25T07:49:35.723364Z daemon:mgr.compute-0.cpskve [INFO] \"Reconfigured mgr.compute-0.cpskve on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.892591Z", "memory_usage": 550502400, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T07:48:08.051426Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mgr.compute-0.cpskve", "version": "18.2.7"}, {"container_id": "04ce8b89bbac", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.95%", "created": "2025-11-25T07:48:01.650700Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-25T07:49:34.893776Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.892197Z", "memory_request": 2147483648, "memory_usage": 39699087, "ports": [], "service_name": "mon", "started": "2025-11-25T07:48:05.623068Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@mon.compute-0", "version": "18.2.7"}, {"container_id": "044841914bbe", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.59%", "created": "2025-11-25T07:50:03.245348Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-25T07:50:03.338108Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.892858Z", "memory_request": 4294967296, "memory_usage": 67675095, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T07:50:03.080284Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@osd.0", "version": "18.2.7"}, {"container_id": "a9285175a4a5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.75%", "created": "2025-11-25T07:50:09.896558Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-25T07:50:10.202839Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.892979Z", "memory_request": 4294967296, "memory_usage": 63522734, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T07:50:09.319772Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@osd.1", "version": "18.2.7"}, {"container_id": "aa9541b139f7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.91%", "created": "2025-11-25T07:50:18.194729Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-25T07:50:18.773174Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T07:51:27.893099Z", "memory_request": 4294967296, "memory_usage": 65556971, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T07:50:17.743672Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a058ea16-8b73-51e1-b172-ed66107102bf@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.suapaq", "daemon_name": "rgw.rgw.compute-0.suapaq", "daemon_type": "rgw", "events": ["2025-11-25T07:51:44.357610Z daemon:rgw.rgw.compute-0.suapaq [INFO] \"Deployed rgw.rgw.compute-0.suapaq on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 25 02:51:47 np0005534516 systemd[1]: libpod-c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6.scope: Deactivated successfully.
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 25 02:51:47 np0005534516 podman[101051]: 2025-11-25 07:51:47.343196167 +0000 UTC m=+0.228276587 container create 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:51:47 np0005534516 podman[100871]: 2025-11-25 07:51:47.343845613 +0000 UTC m=+1.399107223 container died c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: Deploying daemon mds.cephfs.compute-0.dgfvvi on compute-0
Nov 25 02:51:47 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 02:51:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-01946d8ad7285b933773ba7a1301e767d88a455cabd23f58b5c11923d10f7f41-merged.mount: Deactivated successfully.
Nov 25 02:51:47 np0005534516 podman[100871]: 2025-11-25 07:51:47.958059746 +0000 UTC m=+2.013321386 container remove c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6 (image=quay.io/ceph/ceph:v18, name=quirky_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:48 np0005534516 systemd[1]: Started libpod-conmon-93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e.scope.
Nov 25 02:51:48 np0005534516 systemd[1]: libpod-conmon-c9d1d0ffaeaf0ae47109f2700011602ec7d45e9ccf212c4d84b04dc2b3645ad6.scope: Deactivated successfully.
Nov 25 02:51:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:48 np0005534516 podman[101051]: 2025-11-25 07:51:48.170778443 +0000 UTC m=+1.055859253 container init 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 02:51:48 np0005534516 podman[101051]: 2025-11-25 07:51:48.180380156 +0000 UTC m=+1.065460616 container start 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:48 np0005534516 condescending_proskuriakova[101081]: 167 167
Nov 25 02:51:48 np0005534516 systemd[1]: libpod-93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e.scope: Deactivated successfully.
Nov 25 02:51:48 np0005534516 podman[101051]: 2025-11-25 07:51:48.206224094 +0000 UTC m=+1.091304514 container attach 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 02:51:48 np0005534516 podman[101051]: 2025-11-25 07:51:48.208139221 +0000 UTC m=+1.093219671 container died 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 25 02:51:48 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 9 completed events
Nov 25 02:51:48 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 02:51:48 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 43 pg[9.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:48 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 02:51:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:51:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 02:51:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 02:51:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8c382eaf372dee799252887ad2bd4b2964da27c0a5daabe9fcb1f5c11712aac4-merged.mount: Deactivated successfully.
Nov 25 02:51:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 25 02:51:49 np0005534516 python3[101123]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v123: 164 pgs: 1 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:49 np0005534516 ceph-mon[75015]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 02:51:49 np0005534516 podman[101051]: 2025-11-25 07:51:49.271163118 +0000 UTC m=+2.156243588 container remove 93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 25 02:51:49 np0005534516 podman[101124]: 2025-11-25 07:51:49.327464726 +0000 UTC m=+0.211279685 container create 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:51:49 np0005534516 podman[101124]: 2025-11-25 07:51:49.290863077 +0000 UTC m=+0.174678086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:49 np0005534516 systemd[1]: Started libpod-conmon-19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed.scope.
Nov 25 02:51:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b08f7d4889d0d443e41b1a117a8713c6025200f4a23fe80e09c0a905323b9f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b08f7d4889d0d443e41b1a117a8713c6025200f4a23fe80e09c0a905323b9f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:49 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 02:51:49 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 02:51:49 np0005534516 podman[101124]: 2025-11-25 07:51:49.837554808 +0000 UTC m=+0.721369817 container init 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 02:51:49 np0005534516 podman[101124]: 2025-11-25 07:51:49.849688903 +0000 UTC m=+0.733503822 container start 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:51:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 25 02:51:50 np0005534516 podman[101124]: 2025-11-25 07:51:50.108760738 +0000 UTC m=+0.992575697 container attach 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 02:51:50 np0005534516 systemd[1]: libpod-conmon-93cf00209ce9383d7cb4158b202715a9998a76e9f9cbd8ffe90846cd53da4b6e.scope: Deactivated successfully.
Nov 25 02:51:50 np0005534516 systemd[1]: Reloading.
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 25 02:51:50 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:51:50 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506182979' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 02:51:50 np0005534516 flamboyant_lamport[101141]: 
Nov 25 02:51:50 np0005534516 flamboyant_lamport[101141]: {"fsid":"a058ea16-8b73-51e1-b172-ed66107102bf","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":224,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1764057042,"num_in_osds":3,"osd_in_since":1764056988,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":163}],"num_pgs":163,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84127744,"bytes_avail":64327798784,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-25T07:51:45.113900+0000","services":{}},"progress_events":{"ddf1a32f-96ef-44c8-bef3-36d2ad458051":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 02:51:50 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 02:51:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:50 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Nov 25 02:51:50 np0005534516 podman[101202]: 2025-11-25 07:51:50.51507732 +0000 UTC m=+0.039795169 container died 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:51:50 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=43/45 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:50 np0005534516 systemd[1]: libpod-19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed.scope: Deactivated successfully.
Nov 25 02:51:50 np0005534516 systemd[1]: Reloading.
Nov 25 02:51:50 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:51:50 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:51:50 np0005534516 systemd[1]: Starting Ceph mds.cephfs.compute-0.dgfvvi for a058ea16-8b73-51e1-b172-ed66107102bf...
Nov 25 02:51:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v125: 164 pgs: 1 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:51:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7b08f7d4889d0d443e41b1a117a8713c6025200f4a23fe80e09c0a905323b9f8-merged.mount: Deactivated successfully.
Nov 25 02:51:51 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 25 02:51:51 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 25 02:51:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 25 02:51:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 02:51:51 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 25 02:51:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 25 02:51:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 02:51:51 np0005534516 podman[101202]: 2025-11-25 07:51:51.811967118 +0000 UTC m=+1.336684957 container remove 19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed (image=quay.io/ceph/ceph:v18, name=flamboyant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:51:51 np0005534516 systemd[1]: libpod-conmon-19d41517cb732a9f133cc4b0a506f9350f2032cbce3a4a9348508c48748745ed.scope: Deactivated successfully.
Nov 25 02:51:51 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:52 np0005534516 podman[101307]: 2025-11-25 07:51:52.029489042 +0000 UTC m=+0.023714387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:51:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 02:51:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 02:51:52 np0005534516 podman[101307]: 2025-11-25 07:51:52.31505945 +0000 UTC m=+0.309284765 container create 07ed43f61faf85c1983c0d5a6fdde017f3a37417bf797d8e1cb5cf39ec865e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mds-cephfs-compute-0-dgfvvi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 02:51:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410783e7a6e9a94daabe9acd27eb5ba7705f49e81c560d6309a7bf6d2f83aa65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410783e7a6e9a94daabe9acd27eb5ba7705f49e81c560d6309a7bf6d2f83aa65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410783e7a6e9a94daabe9acd27eb5ba7705f49e81c560d6309a7bf6d2f83aa65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410783e7a6e9a94daabe9acd27eb5ba7705f49e81c560d6309a7bf6d2f83aa65/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.dgfvvi supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:52 np0005534516 podman[101307]: 2025-11-25 07:51:52.482405566 +0000 UTC m=+0.476630901 container init 07ed43f61faf85c1983c0d5a6fdde017f3a37417bf797d8e1cb5cf39ec865e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mds-cephfs-compute-0-dgfvvi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 02:51:52 np0005534516 podman[101307]: 2025-11-25 07:51:52.490524313 +0000 UTC m=+0.484749628 container start 07ed43f61faf85c1983c0d5a6fdde017f3a37417bf797d8e1cb5cf39ec865e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mds-cephfs-compute-0-dgfvvi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:52 np0005534516 bash[101307]: 07ed43f61faf85c1983c0d5a6fdde017f3a37417bf797d8e1cb5cf39ec865e56
Nov 25 02:51:52 np0005534516 systemd[1]: Started Ceph mds.cephfs.compute-0.dgfvvi for a058ea16-8b73-51e1-b172-ed66107102bf.
Nov 25 02:51:52 np0005534516 ceph-mds[101326]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:51:52 np0005534516 ceph-mds[101326]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 25 02:51:52 np0005534516 ceph-mds[101326]: main not setting numa affinity
Nov 25 02:51:52 np0005534516 ceph-mds[101326]: pidfile_write: ignore empty --pid-file
Nov 25 02:51:52 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mds-cephfs-compute-0-dgfvvi[101322]: starting mds.cephfs.compute-0.dgfvvi at 
Nov 25 02:51:52 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi Updating MDS map to version 2 from mon.0
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:52 np0005534516 python3[101370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:52 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:53 np0005534516 podman[101371]: 2025-11-25 07:51:52.954990398 +0000 UTC m=+0.032240645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:51:53
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Some PGs (0.006061) are unknown; try again later
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v128: 165 pgs: 1 unknown, 164 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 989 B/s rd, 989 B/s wr, 1 op/s
Nov 25 02:51:53 np0005534516 podman[101371]: 2025-11-25 07:51:53.117647499 +0000 UTC m=+0.194897726 container create d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e3 new map
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T07:51:20.057872+0000#012modified#0112025-11-25T07:51:20.057922+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.dgfvvi{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] compat {c=[1],r=[1],i=[7ff]}]
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi Updating MDS map to version 3 from mon.0
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164718551' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi Monitors have assigned me to become a standby.
Nov 25 02:51:53 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 02:51:53 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 02:51:53 np0005534516 systemd[1]: Started libpod-conmon-d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7.scope.
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:51:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:53 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 25 02:51:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c382648878d27f086b6cc5de6fb1cc801a546652d75d3c06b85fa13e37437705/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c382648878d27f086b6cc5de6fb1cc801a546652d75d3c06b85fa13e37437705/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:51:53 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] up:boot
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] as mds.0
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dgfvvi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.dgfvvi"} v 0) v1
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dgfvvi"}]: dispatch
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e3 all = 0
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e4 new map
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T07:51:20.057872+0000#012modified#0112025-11-25T07:51:53.410597+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dgfvvi{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi Updating MDS map to version 4 from mon.0
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x1
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x100
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x600
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x601
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dgfvvi=up:creating}
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x602
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x603
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x604
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x605
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x606
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x607
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x608
Nov 25 02:51:53 np0005534516 ceph-mds[101326]: mds.0.cache creating system inode with ino:0x609
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev ddf1a32f-96ef-44c8-bef3-36d2ad458051 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 02:51:53 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event ddf1a32f-96ef-44c8-bef3-36d2ad458051 (Updating mds.cephfs deployment (+1 -> 1)) in 8 seconds
Nov 25 02:51:53 np0005534516 podman[101371]: 2025-11-25 07:51:53.67708717 +0000 UTC m=+0.754337397 container init d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:53 np0005534516 podman[101371]: 2025-11-25 07:51:53.683753353 +0000 UTC m=+0.761003580 container start d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:51:53 np0005534516 podman[101371]: 2025-11-25 07:51:53.728038929 +0000 UTC m=+0.805289146 container attach d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 25 02:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 02:51:54 np0005534516 ceph-mds[101326]: mds.0.4 creating_done
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 25 02:51:54 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 10 completed events
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dgfvvi is now active in filesystem cephfs as rank 0
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897000026' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:51:54 np0005534516 naughty_wilbur[101387]: 
Nov 25 02:51:54 np0005534516 systemd[1]: libpod-d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7.scope: Deactivated successfully.
Nov 25 02:51:54 np0005534516 naughty_wilbur[101387]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.suapaq","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 25 02:51:54 np0005534516 podman[101371]: 2025-11-25 07:51:54.463972818 +0000 UTC m=+1.541223015 container died d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: daemon mds.cephfs.compute-0.dgfvvi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:54 np0005534516 ceph-mon[75015]: daemon mds.cephfs.compute-0.dgfvvi is now active in filesystem cephfs as rank 0
Nov 25 02:51:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c382648878d27f086b6cc5de6fb1cc801a546652d75d3c06b85fa13e37437705-merged.mount: Deactivated successfully.
Nov 25 02:51:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:55 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event f6cc32b1-a9ee-406c-9a7d-061aa3123293 (Global Recovery Event) in 6 seconds
Nov 25 02:51:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v130: 165 pgs: 165 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 839 B/s rd, 839 B/s wr, 4 op/s
Nov 25 02:51:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e5 new map
Nov 25 02:51:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T07:51:20.057872+0000#012modified#0112025-11-25T07:51:55.078899+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dgfvvi{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 25 02:51:55 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi Updating MDS map to version 5 from mon.0
Nov 25 02:51:55 np0005534516 ceph-mds[101326]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 02:51:55 np0005534516 ceph-mds[101326]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 25 02:51:55 np0005534516 ceph-mds[101326]: mds.0.4 recovery_done -- successful recovery!
Nov 25 02:51:55 np0005534516 ceph-mds[101326]: mds.0.4 active_start
Nov 25 02:51:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3217719860,v1:192.168.122.100:6815/3217719860] up:active
Nov 25 02:51:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dgfvvi=up:active}
Nov 25 02:51:55 np0005534516 podman[101371]: 2025-11-25 07:51:55.636541867 +0000 UTC m=+2.713792074 container remove d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7 (image=quay.io/ceph/ceph:v18, name=naughty_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:51:55 np0005534516 systemd[1]: libpod-conmon-d9b01443d54578e27d3311b26d102d2d3d964714e164cdbe5087895b7bea32a7.scope: Deactivated successfully.
Nov 25 02:51:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:56 np0005534516 podman[101656]: 2025-11-25 07:51:56.400797755 +0000 UTC m=+0.328347389 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:51:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:51:56 np0005534516 podman[101656]: 2025-11-25 07:51:56.525009222 +0000 UTC m=+0.452558766 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:51:56 np0005534516 python3[101725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:56 np0005534516 podman[101750]: 2025-11-25 07:51:56.819811474 +0000 UTC m=+0.080895436 container create 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:51:56 np0005534516 systemd[1]: Started libpod-conmon-5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236.scope.
Nov 25 02:51:56 np0005534516 podman[101750]: 2025-11-25 07:51:56.779579767 +0000 UTC m=+0.040663679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbdab2820ed59c864ccdcee91f155323f086feb51f867a0e403ab4a1db41e86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbdab2820ed59c864ccdcee91f155323f086feb51f867a0e403ab4a1db41e86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:56 np0005534516 podman[101750]: 2025-11-25 07:51:56.93896818 +0000 UTC m=+0.200052062 container init 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 25 02:51:56 np0005534516 podman[101750]: 2025-11-25 07:51:56.951508824 +0000 UTC m=+0.212592686 container start 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:56 np0005534516 podman[101750]: 2025-11-25 07:51:56.961081777 +0000 UTC m=+0.222165749 container attach 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 02:51:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v131: 165 pgs: 165 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 1.2 KiB/s wr, 6 op/s
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:51:57 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 02:51:57 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395445342' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 02:51:57 np0005534516 great_williams[101793]: mimic
Nov 25 02:51:57 np0005534516 systemd[1]: libpod-5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236.scope: Deactivated successfully.
Nov 25 02:51:57 np0005534516 podman[101750]: 2025-11-25 07:51:57.531558226 +0000 UTC m=+0.792642128 container died 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 25 02:51:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 02:51:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6fbdab2820ed59c864ccdcee91f155323f086feb51f867a0e403ab4a1db41e86-merged.mount: Deactivated successfully.
Nov 25 02:51:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 02:51:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:51:58 np0005534516 podman[101750]: 2025-11-25 07:51:58.414028966 +0000 UTC m=+1.675112868 container remove 5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236 (image=quay.io/ceph/ceph:v18, name=great_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.359070782053786e-08 of space, bias 4.0, pg target 7.630884938464544e-05 quantized to 16 (current 32)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:51:58 np0005534516 systemd[1]: libpod-conmon-5cc07bfa3938c713449f68e7f4d488ab92129c5792d9244e335099105e684236.scope: Deactivated successfully.
Nov 25 02:51:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 25 02:51:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 25 02:51:58 np0005534516 ceph-mds[101326]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 02:51:58 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mds-cephfs-compute-0-dgfvvi[101322]: 2025-11-25T07:51:58.478+0000 7f8287f89640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 25 02:51:58 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 4240ae18-f0a5-46f9-9bc6-0535a66c88c4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:51:58 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:51:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v134: 166 pgs: 1 unknown, 165 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 8 op/s
Nov 25 02:51:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:51:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a2198ad5-892c-4039-851c-f7278a7908a1 does not exist
Nov 25 02:51:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f292e052-948c-4908-ba3a-3d672422fd1b does not exist
Nov 25 02:51:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d1727749-b6a9-47de-a366-9f6fb472c35b does not exist
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:51:59 np0005534516 python3[102059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:51:59 np0005534516 podman[102093]: 2025-11-25 07:51:59.576724874 +0000 UTC m=+0.094362203 container create 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:51:59 np0005534516 podman[102093]: 2025-11-25 07:51:59.511176272 +0000 UTC m=+0.028813591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 25 02:51:59 np0005534516 systemd[1]: Started libpod-conmon-3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324.scope.
Nov 25 02:51:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:51:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2092a8b4a67385fa3210676967e083989827b257adc8363840fe936fc4b462e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2092a8b4a67385fa3210676967e083989827b257adc8363840fe936fc4b462e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:51:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 25 02:51:59 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev c24dec5a-d46c-4eee-9d31-552ce7526c3a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:51:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:00 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 11 completed events
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:52:00 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 02:52:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 02:52:00 np0005534516 podman[102093]: 2025-11-25 07:52:00.332234399 +0000 UTC m=+0.849871718 container init 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:52:00 np0005534516 podman[102093]: 2025-11-25 07:52:00.338984033 +0000 UTC m=+0.856621372 container start 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 02:52:00 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 25 02:52:00 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 25 02:52:00 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 02:52:00 np0005534516 podman[102093]: 2025-11-25 07:52:00.527890313 +0000 UTC m=+1.045527632 container attach 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='client.? 192.168.122.100:0/1342219642' entity='client.rgw.rgw.compute-0.suapaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Nov 25 02:52:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 25 02:52:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 25 02:52:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 25 02:52:00 np0005534516 podman[102237]: 2025-11-25 07:52:00.861413406 +0000 UTC m=+0.028642046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 02:52:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 32 unknown, 165 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 3 op/s
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760947962' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:01 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev b0b0a54d-4a3b-43e2-b3c6-f7857107df69 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:01 np0005534516 frosty_cray[102175]: 
Nov 25 02:52:01 np0005534516 frosty_cray[102175]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 25 02:52:01 np0005534516 systemd[1]: libpod-3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324.scope: Deactivated successfully.
Nov 25 02:52:01 np0005534516 podman[102237]: 2025-11-25 07:52:01.246397729 +0000 UTC m=+0.413626359 container create 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:52:01 np0005534516 podman[102093]: 2025-11-25 07:52:01.247588289 +0000 UTC m=+1.765225608 container died 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:52:01 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 25 02:52:01 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:01 np0005534516 systemd[1]: Started libpod-conmon-6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13.scope.
Nov 25 02:52:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2092a8b4a67385fa3210676967e083989827b257adc8363840fe936fc4b462e4-merged.mount: Deactivated successfully.
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 51 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51 pruub=13.571899414s) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active pruub 124.060287476s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51 pruub=13.571899414s) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown pruub 124.060287476s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:02 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 47fac222-49bf-43be-ba77-6afa5d25b9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:02 np0005534516 podman[102093]: 2025-11-25 07:52:02.135656035 +0000 UTC m=+2.653293374 container remove 3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324 (image=quay.io/ceph/ceph:v18, name=frosty_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:52:02 np0005534516 systemd[1]: libpod-conmon-3364cd846b8a9764cecea00ec9a9125091ecff949604ca753f178365268cb324.scope: Deactivated successfully.
Nov 25 02:52:02 np0005534516 podman[102237]: 2025-11-25 07:52:02.189184655 +0000 UTC m=+1.356413335 container init 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 02:52:02 np0005534516 podman[102237]: 2025-11-25 07:52:02.20050171 +0000 UTC m=+1.367730340 container start 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:02 np0005534516 stoic_ganguly[102268]: 167 167
Nov 25 02:52:02 np0005534516 systemd[1]: libpod-6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13.scope: Deactivated successfully.
Nov 25 02:52:02 np0005534516 podman[102237]: 2025-11-25 07:52:02.21244662 +0000 UTC m=+1.379675250 container attach 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:52:02 np0005534516 podman[102237]: 2025-11-25 07:52:02.213039484 +0000 UTC m=+1.380268184 container died 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 02:52:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a5bc9c898c88f7df9c7354d1762d4d2ac75b6d5c787eddc194c6377ec38898b8-merged.mount: Deactivated successfully.
Nov 25 02:52:02 np0005534516 podman[102237]: 2025-11-25 07:52:02.293713714 +0000 UTC m=+1.460942334 container remove 6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:52:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Nov 25 02:52:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Nov 25 02:52:02 np0005534516 systemd[1]: libpod-conmon-6d1839d2281b6cff50f780c0f4ea8f8dcf448615c88862be1f0a059b2fafce13.scope: Deactivated successfully.
Nov 25 02:52:02 np0005534516 radosgw[100782]: LDAP not started since no server URIs were provided in the configuration.
Nov 25 02:52:02 np0005534516 radosgw[100782]: framework: beast
Nov 25 02:52:02 np0005534516 radosgw[100782]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 25 02:52:02 np0005534516 radosgw[100782]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 25 02:52:02 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-rgw-rgw-compute-0-suapaq[100777]: 2025-11-25T07:52:02.394+0000 7f1dd4324940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 25 02:52:02 np0005534516 radosgw[100782]: starting handler: beast
Nov 25 02:52:02 np0005534516 radosgw[100782]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 02:52:02 np0005534516 radosgw[100782]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.suapaq,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=5446af77-927f-4043-b254-5eea4fad2b10,zone_name=default,zonegroup_id=1fe0b7f0-d98d-4298-832c-7abb60731180,zonegroup_name=default}
Nov 25 02:52:02 np0005534516 podman[102293]: 2025-11-25 07:52:02.491274845 +0000 UTC m=+0.090381598 container create 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:02 np0005534516 podman[102293]: 2025-11-25 07:52:02.422276828 +0000 UTC m=+0.021383601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:02 np0005534516 systemd[1]: Started libpod-conmon-3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d.scope.
Nov 25 02:52:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:02 np0005534516 podman[102293]: 2025-11-25 07:52:02.749521869 +0000 UTC m=+0.348628702 container init 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:52:02 np0005534516 podman[102293]: 2025-11-25 07:52:02.762089283 +0000 UTC m=+0.361196036 container start 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 02:52:02 np0005534516 podman[102293]: 2025-11-25 07:52:02.841057353 +0000 UTC m=+0.440164196 container attach 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v139: 228 pgs: 62 unknown, 166 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.7 KiB/s wr, 8 op/s
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 02:52:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] update: starting ev 97215932-3cea-4d83-88fd-3ca7dfd8393c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 4240ae18-f0a5-46f9-9bc6-0535a66c88c4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 4240ae18-f0a5-46f9-9bc6-0535a66c88c4 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev c24dec5a-d46c-4eee-9d31-552ce7526c3a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event c24dec5a-d46c-4eee-9d31-552ce7526c3a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev b0b0a54d-4a3b-43e2-b3c6-f7857107df69 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event b0b0a54d-4a3b-43e2-b3c6-f7857107df69 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 47fac222-49bf-43be-ba77-6afa5d25b9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 47fac222-49bf-43be-ba77-6afa5d25b9ae (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] complete: finished ev 97215932-3cea-4d83-88fd-3ca7dfd8393c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 02:52:03 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 97215932-3cea-4d83-88fd-3ca7dfd8393c (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 53 pg[8.0( v 42'4 (0'0,42'4] local-lis/les=41/42 n=4 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=14.631764412s) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 42'3 mlcod 42'3 active pruub 126.719352722s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 53 pg[8.0( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=14.631764412s) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 42'3 mlcod 0'0 unknown pruub 126.719352722s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.4( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.5( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.3( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.2( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.10( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.11( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.12( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1( v 42'4 (0'0,42'4] local-lis/les=41/42 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.7( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.8( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.b( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.c( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.9( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.a( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.d( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.6( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.e( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.13( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.14( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.15( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.18( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.19( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.17( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1a( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1b( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1c( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.16( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1e( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1d( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[8.1f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:03 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 02:52:03 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 02:52:03 np0005534516 distracted_torvalds[102852]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:52:03 np0005534516 distracted_torvalds[102852]: --> relative data size: 1.0
Nov 25 02:52:03 np0005534516 distracted_torvalds[102852]: --> All data devices are unavailable
Nov 25 02:52:03 np0005534516 systemd[1]: libpod-3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d.scope: Deactivated successfully.
Nov 25 02:52:03 np0005534516 podman[102293]: 2025-11-25 07:52:03.850224681 +0000 UTC m=+1.449331434 container died 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:03 np0005534516 systemd[1]: libpod-3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d.scope: Consumed 1.040s CPU time.
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.7( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.19( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1e( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1d( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.12( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.10( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.d( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.16( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.b( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.14( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.17( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=51/54 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b4f7c83e8a7883151facc29991fa1cd531cb43c17c040adddab0e4cc3782e06c-merged.mount: Deactivated successfully.
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 02:52:04 np0005534516 podman[102293]: 2025-11-25 07:52:04.4215316 +0000 UTC m=+2.020638353 container remove 3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 02:52:04 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[9.0( v 54'65 (0'0,54'65] local-lis/les=43/45 n=44 ec=43/43 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=10.109748840s) [1] r=0 lpr=55 pi=[43,55)/1 luod=54'64 crt=54'65 lcod 54'63 mlcod 54'63 active pruub 122.963088989s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[9.0( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=43/43 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=10.109748840s) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 54'63 mlcod 0'0 unknown pruub 122.963088989s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.17( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.3( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 55 pg[10.0( v 47'16 (0'0,47'16] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.385427475s) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 47'15 mlcod 47'15 active pruub 114.990997314s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.8( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.a( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.0( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 42'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.7( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.5( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.19( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.13( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1e( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.16( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 55 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:04 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 55 pg[10.0( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.385427475s) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 47'15 mlcod 0'0 unknown pruub 114.990997314s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:04 np0005534516 systemd[1]: libpod-conmon-3dcc1cde855c76576c4e7599f99502e3ab4516b309e9eb9236837f07bc71041d.scope: Deactivated successfully.
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v142: 290 pgs: 124 unknown, 166 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.0 KiB/s wr, 12 op/s
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.221925076 +0000 UTC m=+0.095130662 container create c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.154256542 +0000 UTC m=+0.027462168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:05 np0005534516 systemd[1]: Started libpod-conmon-c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d.scope.
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 25 02:52:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:05 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 16 completed events
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.581721367 +0000 UTC m=+0.454926983 container init c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.593213606 +0000 UTC m=+0.466419192 container start c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 02:52:05 np0005534516 angry_galileo[103050]: 167 167
Nov 25 02:52:05 np0005534516 systemd[1]: libpod-c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d.scope: Deactivated successfully.
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.789355791 +0000 UTC m=+0.662561377 container attach c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:05 np0005534516 podman[103034]: 2025-11-25 07:52:05.789957526 +0000 UTC m=+0.663163112 container died c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1b( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1e( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.b( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.a( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.d( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.13( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.11( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.10( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.12( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1d( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1f( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1c( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1a( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.19( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.18( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.7( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.6( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.5( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.4( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.8( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.f( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.9( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.e( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.c( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1( v 47'16 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.3( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.2( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.14( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.16( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.15( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.17( v 47'16 lc 0'0 (0'0,47'16] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.14( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.11( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.3( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.013166428s) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active pruub 123.321289062s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.2( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.d( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.c( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.f( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.9( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.16( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.15( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.b( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.e( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.a( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.8( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.6( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.7( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.4( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.5( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1a( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.18( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.19( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1e( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1f( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.013166428s) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown pruub 123.321289062s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1c( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1d( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.12( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.13( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.10( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1b( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.17( v 54'65 lc 0'0 (0'0,54'65] local-lis/les=43/45 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:05 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1b( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.14( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.a( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1e( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.d( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.b( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.13( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.11( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1c( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.12( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1a( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1f( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.18( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.10( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.19( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.7( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.6( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.4( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.8( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.5( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1d( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.0( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 47'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.f( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.c( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.e( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.9( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.3( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.1( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.2( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.16( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.14( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.17( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 56 pg[10.15( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.0( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=43/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 54'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.2( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.3( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.d( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.c( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.f( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.16( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.15( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.e( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.b( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.11( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.8( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.6( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.7( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.4( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.5( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.18( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.19( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.9( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1e( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1a( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1d( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.12( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1f( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.a( v 54'65 (0'0,54'65] local-lis/les=55/56 n=2 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.10( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.17( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.13( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1b( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 56 pg[9.1c( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=54'65 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c50f6274c5766e06c58bdf4e4976dd27492bb2d8baa4e4f14be138a0d49d7934-merged.mount: Deactivated successfully.
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:06 np0005534516 podman[103034]: 2025-11-25 07:52:06.633620483 +0000 UTC m=+1.506826069 container remove c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galileo, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:52:06 np0005534516 systemd[1]: libpod-conmon-c311a713720cb38e4156bd2418386bf76ac5bfec58d0eb860f309ed477cb523d.scope: Deactivated successfully.
Nov 25 02:52:06 np0005534516 podman[103076]: 2025-11-25 07:52:06.883808072 +0000 UTC m=+0.074873610 container create 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 25 02:52:06 np0005534516 podman[103076]: 2025-11-25 07:52:06.8438343 +0000 UTC m=+0.034899918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:06 np0005534516 systemd[1]: Started libpod-conmon-2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5.scope.
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 02:52:06 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 25 02:52:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f290fd18c55fe335ff77a973739f9ea20ac15f361b8f75316391444276749c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f290fd18c55fe335ff77a973739f9ea20ac15f361b8f75316391444276749c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f290fd18c55fe335ff77a973739f9ea20ac15f361b8f75316391444276749c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f290fd18c55fe335ff77a973739f9ea20ac15f361b8f75316391444276749c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.17( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.16( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.14( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.13( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.2( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.f( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.e( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.d( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.b( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.9( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.c( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.8( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.a( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.3( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.4( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.5( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.6( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.7( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.18( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1a( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1b( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1c( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1d( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1e( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1f( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.10( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.11( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.12( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.15( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.19( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 5.0 KiB/s wr, 46 op/s
Nov 25 02:52:07 np0005534516 podman[103076]: 2025-11-25 07:52:07.149600369 +0000 UTC m=+0.340665987 container init 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:52:07 np0005534516 podman[103076]: 2025-11-25 07:52:07.161212112 +0000 UTC m=+0.352277670 container start 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 25 02:52:07 np0005534516 podman[103076]: 2025-11-25 07:52:07.243152042 +0000 UTC m=+0.434217650 container attach 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=56/57 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 25 02:52:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 25 02:52:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 02:52:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]: {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    "0": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "devices": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "/dev/loop3"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            ],
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_name": "ceph_lv0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_size": "21470642176",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "name": "ceph_lv0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "tags": {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.crush_device_class": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.encrypted": "0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_id": "0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.vdo": "0"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            },
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "vg_name": "ceph_vg0"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        }
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    ],
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    "1": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "devices": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "/dev/loop4"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            ],
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_name": "ceph_lv1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_size": "21470642176",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "name": "ceph_lv1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "tags": {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.crush_device_class": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.encrypted": "0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_id": "1",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.vdo": "0"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            },
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "vg_name": "ceph_vg1"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        }
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    ],
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    "2": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "devices": [
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "/dev/loop5"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            ],
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_name": "ceph_lv2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_size": "21470642176",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "name": "ceph_lv2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "tags": {
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.crush_device_class": "",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.encrypted": "0",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osd_id": "2",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:                "ceph.vdo": "0"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            },
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "type": "block",
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:            "vg_name": "ceph_vg2"
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:        }
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]:    ]
Nov 25 02:52:07 np0005534516 optimistic_blackwell[103093]: }
Nov 25 02:52:07 np0005534516 systemd[1]: libpod-2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5.scope: Deactivated successfully.
Nov 25 02:52:07 np0005534516 podman[103076]: 2025-11-25 07:52:07.92918404 +0000 UTC m=+1.120249598 container died 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 02:52:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-68f290fd18c55fe335ff77a973739f9ea20ac15f361b8f75316391444276749c-merged.mount: Deactivated successfully.
Nov 25 02:52:08 np0005534516 podman[103076]: 2025-11-25 07:52:08.40253353 +0000 UTC m=+1.593599058 container remove 2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 02:52:08 np0005534516 systemd[1]: libpod-conmon-2b2a2750f270ad008bb6f584c7c40d82a6f9025040811511b3abfcf9a64560b5.scope: Deactivated successfully.
Nov 25 02:52:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 02:52:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 02:52:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 3.4 KiB/s wr, 32 op/s
Nov 25 02:52:09 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 25 02:52:09 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.233940419 +0000 UTC m=+0.101025105 container create 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.17433704 +0000 UTC m=+0.041421806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:09 np0005534516 systemd[1]: Started libpod-conmon-0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735.scope.
Nov 25 02:52:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.430927755 +0000 UTC m=+0.298012471 container init 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.439106654 +0000 UTC m=+0.306191340 container start 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:09 np0005534516 admiring_fermat[103275]: 167 167
Nov 25 02:52:09 np0005534516 systemd[1]: libpod-0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735.scope: Deactivated successfully.
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.456195589 +0000 UTC m=+0.323280305 container attach 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.456795753 +0000 UTC m=+0.323880479 container died 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2f53dafcd1f01876d82f9f20ce7725c8fae9b9e9a0ea1b7b3dda3db9b75b5a54-merged.mount: Deactivated successfully.
Nov 25 02:52:09 np0005534516 podman[103258]: 2025-11-25 07:52:09.662099212 +0000 UTC m=+0.529183918 container remove 0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:52:09 np0005534516 systemd[1]: libpod-conmon-0746d0d629c4c853323ac058f88c9f5067cfbcf85aa6c03bc1d78ac9649d8735.scope: Deactivated successfully.
Nov 25 02:52:09 np0005534516 podman[103301]: 2025-11-25 07:52:09.818539592 +0000 UTC m=+0.022764294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:09 np0005534516 podman[103301]: 2025-11-25 07:52:09.914676758 +0000 UTC m=+0.118901460 container create a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:10 np0005534516 systemd[1]: Started libpod-conmon-a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db.scope.
Nov 25 02:52:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e5412d45f2a6b672d9637f424d60a7079cc851ba0872aa8e8c36f3403380eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e5412d45f2a6b672d9637f424d60a7079cc851ba0872aa8e8c36f3403380eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e5412d45f2a6b672d9637f424d60a7079cc851ba0872aa8e8c36f3403380eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e5412d45f2a6b672d9637f424d60a7079cc851ba0872aa8e8c36f3403380eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:10 np0005534516 podman[103301]: 2025-11-25 07:52:10.293287117 +0000 UTC m=+0.497511869 container init a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 02:52:10 np0005534516 podman[103301]: 2025-11-25 07:52:10.305754739 +0000 UTC m=+0.509979441 container start a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 02:52:10 np0005534516 podman[103301]: 2025-11-25 07:52:10.504855596 +0000 UTC m=+0.709080348 container attach a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:52:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 3.0 KiB/s wr, 27 op/s
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]: {
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_id": 1,
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "type": "bluestore"
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    },
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_id": 2,
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "type": "bluestore"
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    },
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_id": 0,
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:        "type": "bluestore"
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]:    }
Nov 25 02:52:11 np0005534516 elastic_dijkstra[103317]: }
Nov 25 02:52:11 np0005534516 systemd[1]: libpod-a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db.scope: Deactivated successfully.
Nov 25 02:52:11 np0005534516 systemd[1]: libpod-a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db.scope: Consumed 1.023s CPU time.
Nov 25 02:52:11 np0005534516 podman[103301]: 2025-11-25 07:52:11.322494291 +0000 UTC m=+1.526719003 container died a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 02:52:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-23e5412d45f2a6b672d9637f424d60a7079cc851ba0872aa8e8c36f3403380eb-merged.mount: Deactivated successfully.
Nov 25 02:52:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 59 op/s
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 02:52:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 02:52:13 np0005534516 podman[103301]: 2025-11-25 07:52:13.581912484 +0000 UTC m=+3.786137186 container remove a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:52:13 np0005534516 systemd[1]: libpod-conmon-a43f7008deeef640fb083be56906125657e934cbe5c3cbed0d262f622e4200db.scope: Deactivated successfully.
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 02:52:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.d( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412461281s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 47'16 active pruub 128.153656006s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1e( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412446022s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.153656006s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.13( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412601471s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.153793335s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.b( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412528038s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.153793335s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1e( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412369728s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.153656006s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.b( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412503242s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.153793335s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.12( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412604332s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.153945923s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.12( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412581444s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.153945923s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.11( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412370682s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.153945923s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.11( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412355423s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.153945923s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.10( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412358284s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154037476s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.10( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412344933s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154037476s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1a( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412012100s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154006958s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.d( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411663055s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 0'0 unknown NOTIFY pruub 128.153656006s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1a( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411864281s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154006958s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.13( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411664009s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.153793335s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.19( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411722183s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154083252s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.19( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411656380s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154083252s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.7( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411628723s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154113770s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.7( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411602020s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154113770s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.4( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411517143s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154098511s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.6( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411506653s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154098511s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.4( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411499023s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154098511s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.6( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411454201s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154098511s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.8( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411475182s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154220581s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.f( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411421776s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154174805s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.8( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411449432s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154220581s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.f( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411398888s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154174805s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.e( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411239624s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 47'16 active pruub 128.154220581s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411350250s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154342651s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.9( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411396980s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 47'16 active pruub 128.154357910s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.1( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411333084s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154342651s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.e( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411198616s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 0'0 unknown NOTIFY pruub 128.154220581s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.2( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411342621s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154449463s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.9( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411327362s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 0'0 unknown NOTIFY pruub 128.154357910s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.2( v 47'16 (0'0,47'16] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411323547s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154449463s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.14( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411249161s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 47'16 active pruub 128.154479980s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.14( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411213875s) [1] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 0'0 unknown NOTIFY pruub 128.154479980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.15( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411119461s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 47'16 active pruub 128.154464722s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.15( v 56'17 (0'0,56'17] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411091805s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 47'16 mlcod 0'0 unknown NOTIFY pruub 128.154464722s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.16( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411002159s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154464722s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.16( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.410983086s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154464722s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.17( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.410959244s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active pruub 128.154464722s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[10.17( v 47'16 (0'0,47'16] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.410943985s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.154464722s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:14 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d580f088-9659-4713-a013-6cb6a5e7e46d does not exist
Nov 25 02:52:14 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4387b0ca-c9d4-4c95-8cd9-4570a3771f5a does not exist
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.10( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.536964417s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641403198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.536927223s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641403198s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.15( v 57'67 (0'0,57'67] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.415613174s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.520385742s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.15( v 57'67 (0'0,57'67] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.415584564s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.520385742s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.332226753s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437210083s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.332199097s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437210083s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.800941467s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906127930s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.800911903s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906127930s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.331439018s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437042236s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.800518036s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906143188s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.800493240s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906143188s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.331404686s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437042236s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.331151962s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436935425s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.331129074s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436935425s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.11( v 56'67 (0'0,56'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.413992882s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=56'67 lcod 56'66 mlcod 56'66 active pruub 138.519851685s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.11( v 56'67 (0'0,56'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.413965225s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=56'67 lcod 56'66 mlcod 0'0 unknown NOTIFY pruub 138.519851685s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.331004143s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436920166s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.330963135s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436920166s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.535315514s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641479492s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.535070419s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641479492s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.534892082s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641571045s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.534868240s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641571045s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.330162048s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436920166s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.330141068s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436920166s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.799351692s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.905990601s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.799082756s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.905990601s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.534516335s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641601562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.534488678s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641601562s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.798908234s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906173706s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.3( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412410736s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 57'68 active pruub 138.519851685s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.3( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.412373543s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 0'0 unknown NOTIFY pruub 138.519851685s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.329225540s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436889648s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.329200745s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436889648s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.798880577s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906173706s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.533701897s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641632080s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.4( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.533675194s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641632080s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.d( v 57'67 (0'0,57'67] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411535263s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.520034790s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.15( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.d( v 57'67 (0'0,57'67] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.411437988s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.520034790s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.1a( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.327769279s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436660767s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.327728271s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436660767s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.797171593s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906280518s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.797142982s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906280518s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.e( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.532304764s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641616821s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.532276154s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641616821s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.1b( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.18( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.c( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.2( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.1f( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.3( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.c( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.782038689s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906997681s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516644478s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641662598s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.781992912s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906997681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.f( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.395323753s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.520401001s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.f( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.395285606s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.520401001s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516397476s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641662598s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516431808s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641708374s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516376495s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641708374s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.781793594s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.907180786s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.781765938s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.907180786s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.9( v 57'71 (0'0,57'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.395458221s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 57'70 active pruub 138.520919800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.9( v 57'71 (0'0,57'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.395422935s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 0'0 unknown NOTIFY pruub 138.520919800s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310992241s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436614990s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516049385s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641723633s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.b( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.394570351s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 57'68 active pruub 138.520401001s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.b( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.394536018s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 0'0 unknown NOTIFY pruub 138.520401001s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310986519s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.436676025s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310735703s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436676025s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.516013145s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641723633s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310950279s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.436614990s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.d( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780870438s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.906997681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780838013s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.906997681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.515574455s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641769409s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311932564s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438293457s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780806541s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.907196045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311875343s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438293457s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780781746s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.907196045s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.515546799s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641769409s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311692238s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438278198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311669350s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438278198s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.515081406s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641815186s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780495644s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.907211304s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311972618s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438339233s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.515055656s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641815186s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311210632s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438156128s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311404228s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438339233s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.311185837s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438156128s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1( v 57'67 (0'0,57'67] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.393822670s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.520812988s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780431747s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.907211304s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514744759s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.641815186s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.1( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514661789s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.641815186s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1( v 57'67 (0'0,57'67] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.393795013s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.520812988s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310887337s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438293457s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310730934s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438140869s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310866356s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438293457s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.310709000s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438140869s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.7( v 57'71 (0'0,57'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.393293381s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 57'70 active pruub 138.520782471s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.7( v 57'71 (0'0,57'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.393251419s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 0'0 unknown NOTIFY pruub 138.520782471s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780318260s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.908126831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780234337s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.908126831s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514602661s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642654419s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.309912682s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.438125610s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.5( v 57'69 (0'0,57'69] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.392650604s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 57'68 active pruub 138.520874023s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.309857368s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.438125610s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.2( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778934479s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.907272339s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=53/55 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778884888s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.907272339s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514193535s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642715454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.5( v 57'69 (0'0,57'69] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.392590523s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 0'0 unknown NOTIFY pruub 138.520874023s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514165878s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642715454s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.514573097s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642654419s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780412674s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.909118652s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513978958s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642730713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513956070s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642730713s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.780359268s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.909118652s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513732910s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642715454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.8( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513706207s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642715454s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1f( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.391778946s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.521072388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.19( v 57'68 (0'0,57'68] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.391609192s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'68 lcod 57'67 mlcod 57'67 active pruub 138.520889282s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1f( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.391723633s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.521072388s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778636932s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.908020020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513652802s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642852783s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.307681084s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437179565s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513334274s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642852783s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.307655334s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437179565s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778566360s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.908020020s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513113976s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642791748s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.513083458s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642791748s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778541565s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.908294678s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778512001s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.908294678s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1d( v 57'71 (0'0,57'71] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.391087532s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 57'70 active pruub 138.520965576s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1d( v 57'71 (0'0,57'71] local-lis/les=55/56 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.391049385s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'71 lcod 57'70 mlcod 0'0 unknown NOTIFY pruub 138.520965576s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512857437s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642807007s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512804985s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642807007s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.307087898s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437103271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.779225349s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.909347534s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.306992531s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437103271s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.779200554s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.909347534s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512567520s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642837524s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.306763649s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437057495s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512538910s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642837524s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.306721687s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437057495s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.13( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.390785217s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 57'68 active pruub 138.521209717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512342453s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642791748s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.13( v 57'69 (0'0,57'69] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.390753746s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'69 lcod 57'68 mlcod 0'0 unknown NOTIFY pruub 138.521209717s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778481483s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.908966064s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778435707s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.908966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512243271s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642837524s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512224197s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642791748s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.512219429s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642837524s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778263092s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.909027100s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1b( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.390431404s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 57'66 active pruub 138.521270752s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.778165817s) [2] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.909027100s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.1b( v 57'67 (0'0,57'67] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.390375137s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'67 lcod 57'66 mlcod 0'0 unknown NOTIFY pruub 138.521270752s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.306607246s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 136.437591553s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58 pruub=13.306578636s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 136.437591553s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.777977943s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.909027100s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.777955055s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.909027100s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.511707306s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642868042s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.17( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.389930725s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=54'65 lcod 0'0 mlcod 0'0 active pruub 138.521118164s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.511686325s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642868042s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.511740685s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.642944336s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.17( v 54'65 (0'0,54'65] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.389905930s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=54'65 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.521118164s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.777401924s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 136.908966064s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=53/55 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58 pruub=13.777373314s) [0] r=-1 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.908966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=8.511682510s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.642944336s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 58 pg[9.19( v 57'68 (0'0,57'68] local-lis/les=55/56 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58 pruub=15.390453339s) [0] r=-1 lpr=58 pi=[55,58)/1 crt=57'68 lcod 57'67 mlcod 0'0 unknown NOTIFY pruub 138.520889282s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.1c( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.1c( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[8.11( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[7.15( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 58 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.e( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.f( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.4( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.6( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.b( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.f( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.9( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.6( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.9( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.1f( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[7.13( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.1a( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[8.1d( empty local-lis/les=0/0 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:14 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 35 op/s
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.1a deep-scrub ok
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 02:52:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 02:52:15 np0005534516 podman[103584]: 2025-11-25 07:52:15.905300271 +0000 UTC m=+0.294201568 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.1( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-mgr[75313]: [progress INFO root] Completed event 7b51c51c-4aa9-4f59-84a8-f454863311e7 (Global Recovery Event) in 15 seconds
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.4( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.1b( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.7( v 57'71 lc 52'42 (0'0,57'71] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'71 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.14( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.16( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.15( v 57'67 lc 0'0 (0'0,57'67] local-lis/les=58/59 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.19( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.1b( v 57'67 lc 57'66 (0'0,57'67] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.17( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.19( v 57'68 lc 54'63 (0'0,57'68] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'68 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.13( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.1d( v 57'71 lc 57'66 (0'0,57'71] local-lis/les=58/59 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'71 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.1f( v 57'67 lc 57'66 (0'0,57'67] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.9( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.f( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.3( v 57'69 lc 50'8 (0'0,57'69] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'69 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.1e( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.3( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.d( v 57'67 lc 54'59 (0'0,57'67] local-lis/les=58/59 n=2 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.e( v 56'17 lc 47'7 (0'0,56'17] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=56'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.e( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.f( v 57'67 lc 52'43 (0'0,57'67] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.1( v 57'67 lc 52'49 (0'0,57'67] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'67 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.d( v 56'17 lc 47'9 (0'0,56'17] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=56'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.17( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.7( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.17( v 54'65 lc 52'38 (0'0,54'65] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.6( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.18( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.15( v 56'17 lc 47'5 (0'0,56'17] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=56'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.8( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.9( v 57'71 lc 57'66 (0'0,57'71] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'71 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.b( v 57'69 lc 57'66 (0'0,57'69] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'69 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.4( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[10.9( v 56'17 lc 47'15 (0'0,56'17] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=56'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.5( v 57'69 lc 0'0 (0'0,57'69] local-lis/les=58/59 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'69 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.10( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.13( v 57'69 lc 0'0 (0'0,57'69] local-lis/les=58/59 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=57'69 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[7.1f( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.10( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.13( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.11( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.19( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.1a( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.6( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.2( v 47'16 (0'0,47'16] local-lis/les=58/59 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.b( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[9.11( v 56'67 lc 56'66 (0'0,56'67] local-lis/les=58/59 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=58) [0] r=0 lpr=58 pi=[55,58)/1 crt=56'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.12( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.f( v 47'16 (0'0,47'16] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=47'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 59 pg[11.6( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 59 pg[10.14( v 56'17 lc 47'13 (0'0,56'17] local-lis/les=58/59 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=58) [1] r=0 lpr=58 pi=[55,58)/1 crt=56'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.1f( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.11( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.b( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.15( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.11( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.1c( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.18( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.8( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=58/59 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.5( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.9( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.2( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.a( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.d( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=58/59 n=1 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.c( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.2( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.3( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.15( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.1( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=58/59 n=0 ec=53/41 lis/c=53/53 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[53,58)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.1a( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[11.8( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:15 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 59 pg[7.e( empty local-lis/les=58/59 n=0 ec=51/27 lis/c=51/51 les/c/f=54/54/0 sis=58) [2] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:16 np0005534516 podman[103584]: 2025-11-25 07:52:16.015847668 +0000 UTC m=+0.404748885 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:16 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 02:52:16 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 303aa7b3-2e57-4c77-80b1-9fc7e4c71bbe does not exist
Nov 25 02:52:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3ab9b300-322b-4fb4-9e90-82f062cc2362 does not exist
Nov 25 02:52:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6c1e367b-2a88-465c-92bb-3e7fbdfd6655 does not exist
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:52:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 44 op/s; 0 B/s, 0 objects/s recovering
Nov 25 02:52:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 25 02:52:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.542771634 +0000 UTC m=+0.116075212 container create 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.446596867 +0000 UTC m=+0.019900475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:17 np0005534516 systemd[1]: Started libpod-conmon-1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf.scope.
Nov 25 02:52:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.768293843 +0000 UTC m=+0.341597441 container init 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.776442481 +0000 UTC m=+0.349746079 container start 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:17 np0005534516 brave_colden[103898]: 167 167
Nov 25 02:52:17 np0005534516 systemd[1]: libpod-1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf.scope: Deactivated successfully.
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.806748837 +0000 UTC m=+0.380052415 container attach 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.807206788 +0000 UTC m=+0.380510366 container died 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c4976f6787a01934039717b8be142cd7c81aff38edb84e94b43ed6e1c6e46979-merged.mount: Deactivated successfully.
Nov 25 02:52:17 np0005534516 podman[103882]: 2025-11-25 07:52:17.934245985 +0000 UTC m=+0.507549563 container remove 1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_colden, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:17 np0005534516 systemd[1]: libpod-conmon-1950673a6f5863f543784bdef663f3b703f78786d513661a9a0df46ebe951fcf.scope: Deactivated successfully.
Nov 25 02:52:18 np0005534516 podman[103924]: 2025-11-25 07:52:18.073118619 +0000 UTC m=+0.025951801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 25 02:52:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 02:52:18 np0005534516 podman[103924]: 2025-11-25 07:52:18.73085771 +0000 UTC m=+0.683690802 container create 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 02:52:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 02:52:18 np0005534516 systemd[1]: Started libpod-conmon-434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629.scope.
Nov 25 02:52:18 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 25 02:52:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:19 np0005534516 podman[103924]: 2025-11-25 07:52:19.049550003 +0000 UTC m=+1.002383125 container init 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:19 np0005534516 podman[103924]: 2025-11-25 07:52:19.063532552 +0000 UTC m=+1.016365654 container start 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:52:19 np0005534516 podman[103924]: 2025-11-25 07:52:19.076987278 +0000 UTC m=+1.029820410 container attach 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:52:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 11 op/s; 0 B/s, 0 objects/s recovering
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 02:52:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 02:52:20 np0005534516 distracted_zhukovsky[103941]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:52:20 np0005534516 distracted_zhukovsky[103941]: --> relative data size: 1.0
Nov 25 02:52:20 np0005534516 distracted_zhukovsky[103941]: --> All data devices are unavailable
Nov 25 02:52:20 np0005534516 systemd[1]: libpod-434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629.scope: Deactivated successfully.
Nov 25 02:52:20 np0005534516 podman[103924]: 2025-11-25 07:52:20.185435543 +0000 UTC m=+2.138268635 container died 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:20 np0005534516 systemd[1]: libpod-434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629.scope: Consumed 1.056s CPU time.
Nov 25 02:52:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-56ac7a19b30f69e53aefa71b99961dee8fc10a3410808fc8ff4d82f62a053abd-merged.mount: Deactivated successfully.
Nov 25 02:52:20 np0005534516 podman[103924]: 2025-11-25 07:52:20.298491465 +0000 UTC m=+2.251324557 container remove 434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:52:20 np0005534516 systemd[1]: libpod-conmon-434210e3285412a7bbcfe7b003e2834ed5e08f5d0c9453bf33428cfcf71ad629.scope: Deactivated successfully.
Nov 25 02:52:20 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 02:52:20 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 02:52:20 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 02:52:20 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 02:52:20 np0005534516 ceph-mgr[75313]: [progress INFO root] Writing back 17 completed events
Nov 25 02:52:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 02:52:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:20.983916538 +0000 UTC m=+0.024105315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.081745215 +0000 UTC m=+0.121933972 container create c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:52:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 02:52:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 3 active+recovery_wait, 6 active+recovery_wait+degraded, 1 active+clean+scrubbing, 1 active+recovering, 310 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 36 op/s; 12/142 objects degraded (8.451%); 428 B/s, 5 objects/s recovering
Nov 25 02:52:21 np0005534516 systemd[1]: Started libpod-conmon-c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb.scope.
Nov 25 02:52:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 25 02:52:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.238108747 +0000 UTC m=+0.278297604 container init c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.251172619 +0000 UTC m=+0.291361376 container start c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:21 np0005534516 practical_satoshi[104138]: 167 167
Nov 25 02:52:21 np0005534516 systemd[1]: libpod-c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb.scope: Deactivated successfully.
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.269329464 +0000 UTC m=+0.309518311 container attach c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.269858094 +0000 UTC m=+0.310046881 container died c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:52:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 25 02:52:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 25 02:52:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-81233a30ba4ca5d3fbae506444ed34c162e7602f6309884be4aa199c368397fb-merged.mount: Deactivated successfully.
Nov 25 02:52:21 np0005534516 podman[104122]: 2025-11-25 07:52:21.363870884 +0000 UTC m=+0.404059641 container remove c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 02:52:21 np0005534516 systemd[1]: libpod-conmon-c7e357201adbc1a5f4e2752ea7a839c060d45aa7d9794146020abd1b78d70cbb.scope: Deactivated successfully.
Nov 25 02:52:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:21 np0005534516 podman[104164]: 2025-11-25 07:52:21.60107092 +0000 UTC m=+0.082372326 container create e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 02:52:21 np0005534516 podman[104164]: 2025-11-25 07:52:21.554475844 +0000 UTC m=+0.035777280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:21 np0005534516 systemd[1]: Started libpod-conmon-e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16.scope.
Nov 25 02:52:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294169cb0dc6fea2e3cac6536e8c757d998866b70e264ad8669cda3d36708673/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294169cb0dc6fea2e3cac6536e8c757d998866b70e264ad8669cda3d36708673/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294169cb0dc6fea2e3cac6536e8c757d998866b70e264ad8669cda3d36708673/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294169cb0dc6fea2e3cac6536e8c757d998866b70e264ad8669cda3d36708673/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:21 np0005534516 podman[104164]: 2025-11-25 07:52:21.715412778 +0000 UTC m=+0.196714164 container init e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:52:21 np0005534516 podman[104164]: 2025-11-25 07:52:21.72896904 +0000 UTC m=+0.210270436 container start e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:21 np0005534516 podman[104164]: 2025-11-25 07:52:21.736175796 +0000 UTC m=+0.217477202 container attach e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:52:22 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 12/142 objects degraded (8.451%), 6 pgs degraded (PG_DEGRADED)
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]: {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    "0": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "devices": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "/dev/loop3"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            ],
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_name": "ceph_lv0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_size": "21470642176",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "name": "ceph_lv0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "tags": {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.crush_device_class": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.encrypted": "0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_id": "0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.vdo": "0"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            },
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "vg_name": "ceph_vg0"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        }
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    ],
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    "1": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "devices": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "/dev/loop4"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            ],
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_name": "ceph_lv1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_size": "21470642176",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "name": "ceph_lv1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "tags": {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.crush_device_class": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.encrypted": "0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_id": "1",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.vdo": "0"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            },
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "vg_name": "ceph_vg1"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        }
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    ],
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    "2": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "devices": [
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "/dev/loop5"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            ],
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_name": "ceph_lv2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_size": "21470642176",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "name": "ceph_lv2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "tags": {
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.cluster_name": "ceph",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.crush_device_class": "",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.encrypted": "0",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osd_id": "2",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:                "ceph.vdo": "0"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            },
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "type": "block",
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:            "vg_name": "ceph_vg2"
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:        }
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]:    ]
Nov 25 02:52:22 np0005534516 mystifying_jennings[104181]: }
Nov 25 02:52:22 np0005534516 systemd[1]: libpod-e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16.scope: Deactivated successfully.
Nov 25 02:52:22 np0005534516 podman[104164]: 2025-11-25 07:52:22.494598295 +0000 UTC m=+0.975899701 container died e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:52:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-294169cb0dc6fea2e3cac6536e8c757d998866b70e264ad8669cda3d36708673-merged.mount: Deactivated successfully.
Nov 25 02:52:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 25 02:52:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 25 02:52:22 np0005534516 podman[104164]: 2025-11-25 07:52:22.756972808 +0000 UTC m=+1.238274204 container remove e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 02:52:22 np0005534516 systemd[1]: libpod-conmon-e8afbe3a847246ce18ff5cf7f6b4416cad540e71bdc6aa5c0a2e0f00ab630f16.scope: Deactivated successfully.
Nov 25 02:52:23 np0005534516 ceph-mon[75015]: Health check failed: Degraded data redundancy: 12/142 objects degraded (8.451%), 6 pgs degraded (PG_DEGRADED)
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 3 active+recovery_wait, 6 active+recovery_wait+degraded, 1 active+recovering, 311 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 59 op/s; 12/185 objects degraded (6.486%); 352 B/s, 4 objects/s recovering
Nov 25 02:52:23 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 02:52:23 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.391278844 +0000 UTC m=+0.034095326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.50753402 +0000 UTC m=+0.150350502 container create a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:52:23 np0005534516 systemd[1]: Started libpod-conmon-a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3.scope.
Nov 25 02:52:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.614877927 +0000 UTC m=+0.257694409 container init a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.622565852 +0000 UTC m=+0.265382304 container start a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 02:52:23 np0005534516 modest_chatterjee[104360]: 167 167
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.630009202 +0000 UTC m=+0.272825744 container attach a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:52:23 np0005534516 systemd[1]: libpod-a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3.scope: Deactivated successfully.
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.631136214 +0000 UTC m=+0.273952696 container died a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:52:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-890c03abaee4e97f57beac79f56265175441c5e942dd50e708b51aaaef7733ca-merged.mount: Deactivated successfully.
Nov 25 02:52:23 np0005534516 podman[104343]: 2025-11-25 07:52:23.694436566 +0000 UTC m=+0.337253028 container remove a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:23 np0005534516 systemd[1]: libpod-conmon-a9bd6e813940ffef2b04314be5d78b6c9df3dbae2e4f6bbe43b5bf3ff586a1a3.scope: Deactivated successfully.
Nov 25 02:52:23 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 25 02:52:23 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 25 02:52:23 np0005534516 podman[104386]: 2025-11-25 07:52:23.902782003 +0000 UTC m=+0.042760360 container create 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:52:23 np0005534516 systemd[1]: Started libpod-conmon-2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b.scope.
Nov 25 02:52:23 np0005534516 podman[104386]: 2025-11-25 07:52:23.883514755 +0000 UTC m=+0.023493132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:52:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32dee6c454da3dc802e9be8046692c3dd97688d5c63a53a49f936fa9d6f5b284/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32dee6c454da3dc802e9be8046692c3dd97688d5c63a53a49f936fa9d6f5b284/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32dee6c454da3dc802e9be8046692c3dd97688d5c63a53a49f936fa9d6f5b284/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32dee6c454da3dc802e9be8046692c3dd97688d5c63a53a49f936fa9d6f5b284/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:24 np0005534516 podman[104386]: 2025-11-25 07:52:24.03699123 +0000 UTC m=+0.176969607 container init 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:52:24 np0005534516 podman[104386]: 2025-11-25 07:52:24.050461381 +0000 UTC m=+0.190439768 container start 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 02:52:24 np0005534516 podman[104386]: 2025-11-25 07:52:24.060825008 +0000 UTC m=+0.200803365 container attach 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]: {
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_id": 1,
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "type": "bluestore"
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    },
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_id": 2,
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "type": "bluestore"
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    },
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_id": 0,
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:        "type": "bluestore"
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]:    }
Nov 25 02:52:25 np0005534516 gracious_solomon[104403]: }
Nov 25 02:52:25 np0005534516 systemd[1]: libpod-2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b.scope: Deactivated successfully.
Nov 25 02:52:25 np0005534516 podman[104386]: 2025-11-25 07:52:25.110627674 +0000 UTC m=+1.250606031 container died 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:25 np0005534516 systemd[1]: libpod-2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b.scope: Consumed 1.071s CPU time.
Nov 25 02:52:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 63 op/s; 401 B/s, 5 objects/s recovering
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 02:52:25 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 02:52:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-32dee6c454da3dc802e9be8046692c3dd97688d5c63a53a49f936fa9d6f5b284-merged.mount: Deactivated successfully.
Nov 25 02:52:25 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 12/185 objects degraded (6.486%), 6 pgs degraded)
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 12/185 objects degraded (6.486%), 6 pgs degraded)
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:52:25 np0005534516 podman[104386]: 2025-11-25 07:52:25.223696445 +0000 UTC m=+1.363674832 container remove 2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 25 02:52:25 np0005534516 systemd[1]: libpod-conmon-2501d517f6f30cb0b86f3c2be2fd44e66a17c3344f557ab6fbd0ea397f98090b.scope: Deactivated successfully.
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:52:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b5792a98-7aa0-44d4-9cbd-9eddc8787f8d does not exist
Nov 25 02:52:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b55c8f5a-4f0e-4de4-81cb-bee06457a38b does not exist
Nov 25 02:52:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 02:52:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:52:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 63 op/s; 401 B/s, 5 objects/s recovering
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 02:52:27 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 25 02:52:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 02:52:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 44 op/s; 79 B/s, 1 objects/s recovering
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 25 02:52:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 25 02:52:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 63 pg[9.16( v 59'71 (0'0,59'71] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.582684517s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=59'71 lcod 59'70 mlcod 59'70 active pruub 154.520538330s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 63 pg[9.e( v 61'73 (0'0,61'73] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.582095146s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=61'73 lcod 61'72 mlcod 61'72 active pruub 154.520782471s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.7( v 60'73 (0'0,60'73] local-lis/les=58/59 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.394761086s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=60'73 lcod 60'72 mlcod 60'72 active pruub 154.819122314s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.7( v 60'73 (0'0,60'73] local-lis/les=58/59 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.394681931s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=60'73 lcod 60'72 mlcod 0'0 unknown NOTIFY pruub 154.819122314s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 64 pg[9.e( v 61'73 (0'0,61'73] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.582025528s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=61'73 lcod 61'72 mlcod 0'0 unknown NOTIFY pruub 154.520782471s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 63 pg[9.6( v 60'77 (0'0,60'77] local-lis/les=55/56 n=8 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.581732750s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=60'77 lcod 60'76 mlcod 60'76 active pruub 154.520767212s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 64 pg[9.16( v 59'71 (0'0,59'71] local-lis/les=55/56 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.581580162s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=59'71 lcod 59'70 mlcod 0'0 unknown NOTIFY pruub 154.520538330s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 63 pg[9.1e( v 61'75 (0'0,61'75] local-lis/les=55/56 n=6 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.581564903s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=61'75 lcod 61'74 mlcod 61'74 active pruub 154.521041870s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 64 pg[9.1e( v 61'75 (0'0,61'75] local-lis/les=55/56 n=6 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.581510544s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=61'75 lcod 61'74 mlcod 0'0 unknown NOTIFY pruub 154.521041870s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.1f( v 61'75 (0'0,61'75] local-lis/les=58/59 n=5 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.394082069s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=61'75 lcod 61'74 mlcod 61'74 active pruub 154.819610596s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.1f( v 61'75 (0'0,61'75] local-lis/les=58/59 n=5 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.394016266s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=61'75 lcod 61'74 mlcod 0'0 unknown NOTIFY pruub 154.819610596s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 64 pg[9.6( v 60'77 (0'0,60'77] local-lis/les=55/56 n=8 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=15.580618858s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=60'77 lcod 60'76 mlcod 0'0 unknown NOTIFY pruub 154.520767212s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.f( v 61'77 (0'0,61'77] local-lis/les=58/59 n=6 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64 pruub=9.393946648s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=61'77 lcod 61'76 mlcod 61'76 active pruub 154.820373535s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.17( v 60'69 (0'0,60'69] local-lis/les=58/59 n=3 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64 pruub=9.394036293s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=60'69 lcod 60'68 mlcod 60'68 active pruub 154.820587158s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.17( v 60'69 (0'0,60'69] local-lis/les=58/59 n=3 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64 pruub=9.393989563s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=60'69 lcod 60'68 mlcod 0'0 unknown NOTIFY pruub 154.820587158s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 64 pg[9.f( v 61'77 (0'0,61'77] local-lis/les=58/59 n=6 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64 pruub=9.393508911s) [2] r=-1 lpr=64 pi=[58,64)/1 crt=61'77 lcod 61'76 mlcod 0'0 unknown NOTIFY pruub 154.820373535s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 25 02:52:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.17( v 60'69 lc 52'38 (0'0,60'69] local-lis/les=64/65 n=3 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=60'69 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 65 pg[9.8( v 60'77 (0'0,60'77] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=14.596983910s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=60'77 lcod 60'76 mlcod 60'76 active pruub 154.520782471s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:31 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 65 pg[9.8( v 60'77 (0'0,60'77] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=14.596915245s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=60'77 lcod 60'76 mlcod 0'0 unknown NOTIFY pruub 154.520782471s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:31 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 65 pg[9.18( v 59'71 (0'0,59'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=14.596852303s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=59'71 lcod 59'70 mlcod 59'70 active pruub 154.521072388s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:31 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 65 pg[9.18( v 59'71 (0'0,59'71] local-lis/les=55/56 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=14.596764565s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=59'71 lcod 59'70 mlcod 0'0 unknown NOTIFY pruub 154.521072388s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.1f( v 61'75 lc 57'66 (0'0,61'75] local-lis/les=64/65 n=5 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=61'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.f( v 61'77 lc 52'43 (0'0,61'77] local-lis/les=64/65 n=6 ec=55/43 lis/c=58/58 les/c/f=59/60/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=61'77 lcod 0'0 mlcod 0'0 active+degraded m=7 mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.e( v 61'73 lc 52'50 (0'0,61'73] local-lis/les=63/65 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=61'73 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.1e( v 61'75 lc 59'66 (0'0,61'75] local-lis/les=63/65 n=6 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=61'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.7( v 60'73 lc 52'42 (0'0,60'73] local-lis/les=64/65 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=64) [2] r=0 lpr=64 pi=[58,64)/1 crt=60'73 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.16( v 59'71 lc 52'53 (0'0,59'71] local-lis/les=63/65 n=4 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=59'71 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:31 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 65 pg[9.6( v 60'77 lc 56'66 (0'0,60'77] local-lis/les=63/65 n=8 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=63) [2] r=0 lpr=64 pi=[55,63)/1 crt=60'77 lcod 0'0 mlcod 0'0 active+degraded m=6 mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:32 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 25 02:52:32 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 25 02:52:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 25 02:52:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 02:52:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 25 02:52:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 02:52:32 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 66 pg[9.8( v 60'77 lc 54'57 (0'0,60'77] local-lis/les=65/66 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=60'77 lcod 0'0 mlcod 0'0 active+degraded m=7 mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:32 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 66 pg[9.18( v 59'71 lc 52'37 (0'0,59'71] local-lis/les=65/66 n=3 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=59'71 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 02:52:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 02:52:33 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 02:52:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 02:52:33 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 25 02:52:33 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 02:52:34 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 02:52:34 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 02:52:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 02:52:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 314 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 33/213 objects degraded (15.493%); 2/213 objects misplaced (0.939%); 38 B/s, 2 objects/s recovering
Nov 25 02:52:35 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 02:52:35 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 02:52:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 02:52:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 02:52:35 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 33/213 objects degraded (15.493%), 6 pgs degraded (PG_DEGRADED)
Nov 25 02:52:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 02:52:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 02:52:36 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 02:52:36 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 02:52:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:36 np0005534516 ceph-mon[75015]: Health check failed: Degraded data redundancy: 33/213 objects degraded (15.493%), 6 pgs degraded (PG_DEGRADED)
Nov 25 02:52:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 314 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 33/213 objects degraded (15.493%); 2/213 objects misplaced (0.939%); 36 B/s, 2 objects/s recovering
Nov 25 02:52:38 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 02:52:38 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 02:52:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v171: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 129 B/s, 6 objects/s recovering
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 33/213 objects degraded (15.493%), 6 pgs degraded)
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 25 02:52:39 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Nov 25 02:52:39 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 02:52:39 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 25 02:52:40 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 02:52:40 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 02:52:40 np0005534516 ceph-mon[75015]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 33/213 objects degraded (15.493%), 6 pgs degraded)
Nov 25 02:52:40 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:52:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 02:52:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 6 objects/s recovering
Nov 25 02:52:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 25 02:52:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 02:52:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 25 02:52:41 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 02:52:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 02:52:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 02:52:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 25 02:52:42 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 02:52:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 96 B/s, 4 objects/s recovering
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 02:52:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 25 02:52:44 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 02:52:44 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 02:52:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 02:52:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 02:52:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 02:52:45 np0005534516 python3[104525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:52:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 25 02:52:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 02:52:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 70 pg[9.c( v 60'75 (0'0,60'75] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.835994720s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=60'75 lcod 60'74 mlcod 60'74 active pruub 162.520614624s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 70 pg[9.c( v 60'75 (0'0,60'75] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.835805893s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=60'75 lcod 60'74 mlcod 0'0 unknown NOTIFY pruub 162.520614624s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 70 pg[9.1c( v 60'79 (0'0,60'79] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.836124420s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=60'79 lcod 60'78 mlcod 60'78 active pruub 162.521514893s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 70 pg[9.1c( v 60'79 (0'0,60'79] local-lis/les=55/56 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.836026192s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=60'79 lcod 60'78 mlcod 0'0 unknown NOTIFY pruub 162.521514893s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:52:45 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 02:52:45 np0005534516 podman[104526]: 2025-11-25 07:52:45.288851507 +0000 UTC m=+0.026244638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:52:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 25 02:52:45 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 70 pg[9.c( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:45 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 70 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:52:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 25 02:52:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 25 02:52:45 np0005534516 podman[104526]: 2025-11-25 07:52:45.975151788 +0000 UTC m=+0.712544939 container create 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 02:52:46 np0005534516 systemd[1]: Started libpod-conmon-624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8.scope.
Nov 25 02:52:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1bc7ca2fa5b8a0078985a02cd09e91b9853dc9d45cbd27d14c3d6a0a9046d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1bc7ca2fa5b8a0078985a02cd09e91b9853dc9d45cbd27d14c3d6a0a9046d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 02:52:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 02:52:46 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 25 02:52:46 np0005534516 podman[104526]: 2025-11-25 07:52:46.607249249 +0000 UTC m=+1.344642440 container init 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:52:46 np0005534516 podman[104526]: 2025-11-25 07:52:46.618581947 +0000 UTC m=+1.355975068 container start 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:52:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:46 np0005534516 podman[104526]: 2025-11-25 07:52:46.997572142 +0000 UTC m=+1.734965273 container attach 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 02:52:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 02:52:47 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 02:52:47 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 02:52:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 25 02:52:47 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 72 pg[9.1c( v 60'79 lc 57'66 (0'0,60'79] local-lis/les=70/72 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=60'79 lcod 0'0 mlcod 0'0 active+degraded m=7 mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:47 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 72 pg[9.c( v 60'75 lc 56'66 (0'0,60'75] local-lis/les=70/72 n=7 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=60'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:52:47 np0005534516 upbeat_northcutt[104542]: could not fetch user info: no user info saved
Nov 25 02:52:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 02:52:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 02:52:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 02:52:48 np0005534516 systemd[1]: libpod-624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8.scope: Deactivated successfully.
Nov 25 02:52:48 np0005534516 podman[104526]: 2025-11-25 07:52:48.70482655 +0000 UTC m=+3.442219671 container died 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:52:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-33c1bc7ca2fa5b8a0078985a02cd09e91b9853dc9d45cbd27d14c3d6a0a9046d-merged.mount: Deactivated successfully.
Nov 25 02:52:48 np0005534516 podman[104526]: 2025-11-25 07:52:48.86911167 +0000 UTC m=+3.606504781 container remove 624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8 (image=quay.io/ceph/ceph:v18, name=upbeat_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 02:52:48 np0005534516 systemd[1]: libpod-conmon-624ddd577ec714ae1439a833c4d503881ff602f1356bb0c557d5d66d905ca0e8.scope: Deactivated successfully.
Nov 25 02:52:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 319 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 5/213 objects degraded (2.347%); 18 B/s, 0 objects/s recovering
Nov 25 02:52:49 np0005534516 ceph-mon[75015]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 5/213 objects degraded (2.347%), 1 pg degraded (PG_DEGRADED)
Nov 25 02:52:49 np0005534516 python3[104665]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid a058ea16-8b73-51e1-b172-ed66107102bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:52:49 np0005534516 ceph-mon[75015]: Health check failed: Degraded data redundancy: 5/213 objects degraded (2.347%), 1 pg degraded (PG_DEGRADED)
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.313836587 +0000 UTC m=+0.054591208 container create 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:49 np0005534516 systemd[1]: Started libpod-conmon-7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673.scope.
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.289034769 +0000 UTC m=+0.029789400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 02:52:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:52:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb6b3bee949174eaceae975efd884b3bd05a00c0542da62e9d0ba6af889babf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb6b3bee949174eaceae975efd884b3bd05a00c0542da62e9d0ba6af889babf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.65288926 +0000 UTC m=+0.393643891 container init 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.658611015 +0000 UTC m=+0.399365636 container start 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.714534199 +0000 UTC m=+0.455288880 container attach 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]: {
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "user_id": "openstack",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "display_name": "openstack",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "email": "",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "suspended": 0,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "max_buckets": 1000,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "subusers": [],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "keys": [
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        {
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:            "user": "openstack",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:            "access_key": "B9ITLP4BSND4UGQXF0HS",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:            "secret_key": "N5xDP7BzH2BRY3GwopZvwFuUoGLDFDSPfO2bA4Gn"
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        }
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    ],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "swift_keys": [],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "caps": [],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "op_mask": "read, write, delete",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "default_placement": "",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "default_storage_class": "",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "placement_tags": [],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "bucket_quota": {
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "enabled": false,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "check_on_raw": false,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_size": -1,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_size_kb": 0,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_objects": -1
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    },
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "user_quota": {
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "enabled": false,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "check_on_raw": false,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_size": -1,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_size_kb": 0,
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:        "max_objects": -1
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    },
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "temp_url_keys": [],
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "type": "rgw",
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]:    "mfa_ids": []
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]: }
Nov 25 02:52:49 np0005534516 relaxed_margulis[104682]: 
Nov 25 02:52:49 np0005534516 systemd[1]: libpod-7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673.scope: Deactivated successfully.
Nov 25 02:52:49 np0005534516 podman[104666]: 2025-11-25 07:52:49.962725056 +0000 UTC m=+0.703479647 container died 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:52:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6cb6b3bee949174eaceae975efd884b3bd05a00c0542da62e9d0ba6af889babf-merged.mount: Deactivated successfully.
Nov 25 02:52:50 np0005534516 podman[104666]: 2025-11-25 07:52:50.020114029 +0000 UTC m=+0.760868650 container remove 7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673 (image=quay.io/ceph/ceph:v18, name=relaxed_margulis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:52:50 np0005534516 systemd[1]: libpod-conmon-7c0dcf792f62efd9634cb3d3eee0c19dab0498898ba67026b2474057f1b52673.scope: Deactivated successfully.
Nov 25 02:52:50 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 02:52:50 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 02:52:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 25 02:52:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 25 02:52:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 319 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 423 B/s rd, 0 op/s; 5/213 objects degraded (2.347%); 15 B/s, 0 objects/s recovering
Nov 25 02:52:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 02:52:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 02:52:52 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 02:52:52 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 02:52:52 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 25 02:52:52 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:52:53
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Some objects (0.023474) are degraded; try again later
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 319 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s; 5/213 objects degraded (2.347%); 13 B/s, 0 objects/s recovering
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:52:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s; 12 B/s, 1 objects/s recovering
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/213 objects degraded (2.347%), 1 pg degraded)
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 02:52:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 25 02:52:56 np0005534516 ceph-mon[75015]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/213 objects degraded (2.347%), 1 pg degraded)
Nov 25 02:52:56 np0005534516 ceph-mon[75015]: Cluster is now healthy
Nov 25 02:52:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 02:52:56 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 25 02:52:56 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 25 02:52:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:52:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s; 11 B/s, 1 objects/s recovering
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 25 02:52:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 02:52:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Nov 25 02:52:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Nov 25 02:52:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 02:52:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 02:52:58 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 25 02:52:58 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 25 02:52:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 02:52:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
Nov 25 02:52:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 25 02:52:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 02:52:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 02:52:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 02:52:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 25 02:53:00 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 02:53:00 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 02:53:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 02:53:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 02:53:00 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 25 02:53:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 02:53:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 02:53:01 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 02:53:01 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 25 02:53:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 02:53:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 02:53:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 02:53:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 02:53:02 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 02:53:02 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:53:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 25 02:53:03 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 77 pg[9.13( v 61'75 (0'0,61'75] local-lis/les=58/59 n=5 ec=55/43 lis/c=58/58 les/c/f=59/61/0 sis=77 pruub=8.448917389s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=61'75 mlcod 61'75 active pruub 186.821640015s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:03 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 77 pg[9.13( v 61'75 (0'0,61'75] local-lis/les=58/59 n=5 ec=55/43 lis/c=58/58 les/c/f=59/61/0 sis=77 pruub=8.448837280s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=61'75 mlcod 0'0 unknown NOTIFY pruub 186.821640015s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 02:53:03 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 77 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/61/0 sis=77) [2] r=0 lpr=77 pi=[58,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:04 np0005534516 systemd-logind[822]: New session 34 of user zuul.
Nov 25 02:53:04 np0005534516 systemd[1]: Started Session 34 of User zuul.
Nov 25 02:53:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 25 02:53:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 02:53:04 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 25 02:53:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 02:53:04 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 78 pg[9.13( v 61'75 lc 57'66 (0'0,61'75] local-lis/les=77/78 n=5 ec=55/43 lis/c=58/58 les/c/f=59/61/0 sis=77) [2] r=0 lpr=77 pi=[58,77)/1 crt=61'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:04 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 02:53:04 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 02:53:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 02:53:05 np0005534516 python3.9[104932]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 02:53:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 25 02:53:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 25 02:53:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 25 02:53:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 02:53:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:06 np0005534516 python3.9[105150]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:53:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 02:53:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 02:53:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 80 pg[9.15( v 61'73 (0'0,61'73] local-lis/les=58/59 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=12.409260750s) [1] r=-1 lpr=80 pi=[58,80)/1 crt=61'73 lcod 61'72 mlcod 61'72 active pruub 194.820388794s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:07 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 80 pg[9.15( v 61'73 (0'0,61'73] local-lis/les=58/59 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=12.408411026s) [1] r=-1 lpr=80 pi=[58,80)/1 crt=61'73 lcod 61'72 mlcod 0'0 unknown NOTIFY pruub 194.820388794s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:07 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 80 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=80) [1] r=0 lpr=80 pi=[58,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:08 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 25 02:53:08 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 25 02:53:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 02:53:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 02:53:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 25 02:53:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 02:53:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 02:53:08 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 25 02:53:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 25 02:53:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 25 02:53:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 02:53:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 25 02:53:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 02:53:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 25 02:53:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 25 02:53:10 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 81 pg[9.15( v 61'73 lc 57'66 (0'0,61'73] local-lis/les=80/81 n=4 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=80) [1] r=0 lpr=80 pi=[58,80)/1 crt=61'73 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 02:53:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 02:53:10 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 25 02:53:10 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 25 02:53:10 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 25 02:53:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 02:53:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Nov 25 02:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 25 02:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 02:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 25 02:53:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 02:53:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 02:53:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 25 02:53:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 02:53:12 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 82 pg[9.16( v 59'71 (0'0,59'71] local-lis/les=63/65 n=4 ec=55/43 lis/c=63/63 les/c/f=65/67/0 sis=82 pruub=15.079378128s) [0] r=-1 lpr=82 pi=[63,82)/1 crt=59'71 mlcod 59'71 active pruub 185.648956299s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:12 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 82 pg[9.16( v 59'71 (0'0,59'71] local-lis/les=63/65 n=4 ec=55/43 lis/c=63/63 les/c/f=65/67/0 sis=82 pruub=15.079277992s) [0] r=-1 lpr=82 pi=[63,82)/1 crt=59'71 mlcod 0'0 unknown NOTIFY pruub 185.648956299s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:12 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 82 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=63/63 les/c/f=65/67/0 sis=82) [0] r=0 lpr=82 pi=[63,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:12 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 02:53:12 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 02:53:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 objects/s recovering
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 02:53:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 02:53:13 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 84 pg[9.16( v 59'71 lc 52'53 (0'0,59'71] local-lis/les=82/84 n=4 ec=55/43 lis/c=63/63 les/c/f=65/67/0 sis=82) [0] r=0 lpr=82 pi=[63,82)/1 crt=59'71 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 02:53:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 02:53:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 02:53:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 25 02:53:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 25 02:53:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 25 02:53:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 25 02:53:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 02:53:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 02:53:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 02:53:16 np0005534516 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 02:53:16 np0005534516 systemd[1]: session-34.scope: Consumed 8.528s CPU time.
Nov 25 02:53:16 np0005534516 systemd-logind[822]: Session 34 logged out. Waiting for processes to exit.
Nov 25 02:53:16 np0005534516 systemd-logind[822]: Removed session 34.
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 25 02:53:16 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 85 pg[9.19( v 61'78 (0'0,61'78] local-lis/les=58/59 n=6 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=11.005993843s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=61'78 lcod 61'77 mlcod 61'77 active pruub 202.820510864s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:16 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 85 pg[9.19( v 61'78 (0'0,61'78] local-lis/les=58/59 n=6 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=11.005768776s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=61'78 lcod 61'77 mlcod 0'0 unknown NOTIFY pruub 202.820510864s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:16 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 02:53:17 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 02:53:17 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 85 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 25 02:53:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 25 02:53:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 02:53:17 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 02:53:17 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 02:53:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 25 02:53:18 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Nov 25 02:53:18 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Nov 25 02:53:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 02:53:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 02:53:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 02:53:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 02:53:18 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 25 02:53:18 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 86 pg[9.19( v 61'78 lc 54'63 (0'0,61'78] local-lis/les=85/86 n=6 ec=55/43 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=61'78 lcod 0'0 mlcod 0'0 active+degraded m=7 mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 1 active+recovering, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 36 B/s, 1 objects/s recovering
Nov 25 02:53:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 25 02:53:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 02:53:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 25 02:53:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 02:53:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 02:53:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 02:53:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 25 02:53:20 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 25 02:53:20 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 25 02:53:20 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 02:53:20 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 02:53:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 active+recovering, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 18 B/s, 0 objects/s recovering
Nov 25 02:53:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 02:53:21 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Nov 25 02:53:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 02:53:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 02:53:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 02:53:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 02:53:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Nov 25 02:53:22 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 02:53:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 25 02:53:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 02:53:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 1 active+recovering, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 16 B/s, 0 objects/s recovering
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 02:53:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 25 02:53:24 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 02:53:24 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 02:53:24 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 02:53:25 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 88 pg[9.1c( v 60'79 (0'0,60'79] local-lis/les=70/72 n=7 ec=55/43 lis/c=70/70 les/c/f=72/72/0 sis=88 pruub=10.505496025s) [0] r=-1 lpr=88 pi=[70,88)/1 crt=60'79 mlcod 60'79 active pruub 193.737670898s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:25 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 88 pg[9.1c( v 60'79 (0'0,60'79] local-lis/les=70/72 n=7 ec=55/43 lis/c=70/70 les/c/f=72/72/0 sis=88 pruub=10.505405426s) [0] r=-1 lpr=88 pi=[70,88)/1 crt=60'79 mlcod 0'0 unknown NOTIFY pruub 193.737670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:25 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 88 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=70/70 les/c/f=72/72/0 sis=88) [0] r=0 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:25 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Nov 25 02:53:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 25 02:53:25 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 02:53:25 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 02:53:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 02:53:26 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 02:53:26 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f0225103-91f2-4062-9ee1-fdef4662402b does not exist
Nov 25 02:53:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fedf59f0-d6d5-4c84-a3b1-27abdd600c4f does not exist
Nov 25 02:53:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6469a46b-5b0d-4e3b-b770-eeeab80068d1 does not exist
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:53:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:53:26 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 02:53:26 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 02:53:27 np0005534516 podman[105478]: 2025-11-25 07:53:27.043533301 +0000 UTC m=+0.020090405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 02:53:27 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 90 pg[9.1c( v 60'79 lc 57'66 (0'0,60'79] local-lis/les=88/90 n=7 ec=55/43 lis/c=70/70 les/c/f=72/72/0 sis=88) [0] r=0 lpr=88 pi=[70,88)/1 crt=60'79 lcod 0'0 mlcod 0'0 active+degraded m=7 mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 25 02:53:27 np0005534516 podman[105478]: 2025-11-25 07:53:27.550825461 +0000 UTC m=+0.527382545 container create ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:53:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Nov 25 02:53:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Nov 25 02:53:27 np0005534516 systemd[1]: Started libpod-conmon-ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098.scope.
Nov 25 02:53:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:28 np0005534516 podman[105478]: 2025-11-25 07:53:28.261705448 +0000 UTC m=+1.238262552 container init ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:53:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 02:53:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 02:53:28 np0005534516 podman[105478]: 2025-11-25 07:53:28.275103785 +0000 UTC m=+1.251660869 container start ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:53:28 np0005534516 eloquent_fermi[105495]: 167 167
Nov 25 02:53:28 np0005534516 systemd[1]: libpod-ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098.scope: Deactivated successfully.
Nov 25 02:53:28 np0005534516 conmon[105495]: conmon ceefe76bbd769bc7e01a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098.scope/container/memory.events
Nov 25 02:53:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 25 02:53:28 np0005534516 podman[105478]: 2025-11-25 07:53:28.351374365 +0000 UTC m=+1.327931449 container attach ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:53:28 np0005534516 podman[105478]: 2025-11-25 07:53:28.352570877 +0000 UTC m=+1.329127981 container died ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:53:28 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 91 pg[9.1e( v 61'75 (0'0,61'75] local-lis/les=63/65 n=6 ec=55/43 lis/c=63/63 les/c/f=65/66/0 sis=91 pruub=15.013614655s) [0] r=-1 lpr=91 pi=[63,91)/1 crt=61'75 mlcod 61'75 active pruub 201.649581909s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:28 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 91 pg[9.1e( v 61'75 (0'0,61'75] local-lis/les=63/65 n=6 ec=55/43 lis/c=63/63 les/c/f=65/66/0 sis=91 pruub=15.013522148s) [0] r=-1 lpr=91 pi=[63,91)/1 crt=61'75 mlcod 0'0 unknown NOTIFY pruub 201.649581909s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:28 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 91 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=63/63 les/c/f=65/66/0 sis=91) [0] r=0 lpr=91 pi=[63,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 02:53:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 02:53:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4f55d889130ab8af2ee8dd506b2011fe42598f020dc81a9fa4ba9a2c286c6103-merged.mount: Deactivated successfully.
Nov 25 02:53:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 02:53:29 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Nov 25 02:53:29 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Nov 25 02:53:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 02:53:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:53:29 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 25 02:53:29 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 25 02:53:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 25 02:53:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:53:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 02:53:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 25 02:53:30 np0005534516 podman[105478]: 2025-11-25 07:53:30.108493432 +0000 UTC m=+3.085050506 container remove ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 02:53:30 np0005534516 systemd[1]: libpod-conmon-ceefe76bbd769bc7e01a6b680df1165438296cbc7baa96b75917adeae88f3098.scope: Deactivated successfully.
Nov 25 02:53:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 02:53:30 np0005534516 podman[105519]: 2025-11-25 07:53:30.242965481 +0000 UTC m=+0.026277530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:30 np0005534516 podman[105519]: 2025-11-25 07:53:30.451078009 +0000 UTC m=+0.234390068 container create 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:53:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 92 pg[9.1f( v 61'75 (0'0,61'75] local-lis/les=64/65 n=5 ec=55/43 lis/c=64/64 les/c/f=65/66/0 sis=92 pruub=12.946891785s) [1] r=-1 lpr=92 pi=[64,92)/1 crt=61'75 mlcod 61'75 active pruub 201.649566650s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 02:53:30 np0005534516 ceph-osd[90711]: osd.2 pg_epoch: 92 pg[9.1f( v 61'75 (0'0,61'75] local-lis/les=64/65 n=5 ec=55/43 lis/c=64/64 les/c/f=65/66/0 sis=92 pruub=12.946835518s) [1] r=-1 lpr=92 pi=[64,92)/1 crt=61'75 mlcod 0'0 unknown NOTIFY pruub 201.649566650s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 02:53:30 np0005534516 systemd[1]: Started libpod-conmon-54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940.scope.
Nov 25 02:53:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:30 np0005534516 ceph-osd[88620]: osd.0 pg_epoch: 92 pg[9.1e( v 61'75 lc 59'66 (0'0,61'75] local-lis/les=91/92 n=6 ec=55/43 lis/c=63/63 les/c/f=65/66/0 sis=91) [0] r=0 lpr=91 pi=[63,91)/1 crt=61'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:30 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=64/64 les/c/f=65/66/0 sis=92) [1] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 02:53:30 np0005534516 podman[105519]: 2025-11-25 07:53:30.866871604 +0000 UTC m=+0.650183643 container init 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 02:53:30 np0005534516 podman[105519]: 2025-11-25 07:53:30.874876807 +0000 UTC m=+0.658188826 container start 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:53:30 np0005534516 podman[105519]: 2025-11-25 07:53:30.946217815 +0000 UTC m=+0.729529864 container attach 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 02:53:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 02:53:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 02:53:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 peering, 1 active+recovering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1/215 objects misplaced (0.465%); 18 B/s, 0 objects/s recovering
Nov 25 02:53:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 25 02:53:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 02:53:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 02:53:31 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 25 02:53:31 np0005534516 ceph-osd[89702]: osd.1 pg_epoch: 93 pg[9.1f( v 61'75 lc 57'66 (0'0,61'75] local-lis/les=92/93 n=5 ec=55/43 lis/c=64/64 les/c/f=65/66/0 sis=92) [1] r=0 lpr=92 pi=[64,92)/1 crt=61'75 lcod 0'0 mlcod 0'0 active+degraded m=5 mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 02:53:31 np0005534516 reverent_ramanujan[105536]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:53:31 np0005534516 reverent_ramanujan[105536]: --> relative data size: 1.0
Nov 25 02:53:31 np0005534516 reverent_ramanujan[105536]: --> All data devices are unavailable
Nov 25 02:53:31 np0005534516 systemd[1]: libpod-54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940.scope: Deactivated successfully.
Nov 25 02:53:31 np0005534516 podman[105519]: 2025-11-25 07:53:31.984978727 +0000 UTC m=+1.768290746 container died 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 02:53:31 np0005534516 systemd[1]: libpod-54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940.scope: Consumed 1.060s CPU time.
Nov 25 02:53:32 np0005534516 systemd-logind[822]: New session 35 of user zuul.
Nov 25 02:53:32 np0005534516 systemd[1]: Started Session 35 of User zuul.
Nov 25 02:53:32 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 25 02:53:32 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 25 02:53:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-38d4aa09d9821bbff59dcddc219a6dde16fdede460e7d5330784e83fdbba63f2-merged.mount: Deactivated successfully.
Nov 25 02:53:32 np0005534516 python3.9[105729]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 02:53:33 np0005534516 podman[105519]: 2025-11-25 07:53:33.069526408 +0000 UTC m=+2.852838427 container remove 54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ramanujan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:53:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 2 peering, 1 active+recovering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1/215 objects misplaced (0.465%); 18 B/s, 0 objects/s recovering
Nov 25 02:53:33 np0005534516 systemd[1]: libpod-conmon-54f3b21f5d022ca6bc1966e8457fc83cd1dcb7fee28f739fe21d2c245e32f940.scope: Deactivated successfully.
Nov 25 02:53:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Nov 25 02:53:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Nov 25 02:53:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 02:53:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 02:53:33 np0005534516 podman[105989]: 2025-11-25 07:53:33.758483961 +0000 UTC m=+0.021691477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:33 np0005534516 podman[105989]: 2025-11-25 07:53:33.88508157 +0000 UTC m=+0.148289076 container create bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:53:34 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 02:53:34 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 02:53:34 np0005534516 python3.9[106053]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:53:34 np0005534516 systemd[1]: Started libpod-conmon-bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7.scope.
Nov 25 02:53:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:34 np0005534516 podman[105989]: 2025-11-25 07:53:34.671180169 +0000 UTC m=+0.934387695 container init bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:53:34 np0005534516 podman[105989]: 2025-11-25 07:53:34.679916012 +0000 UTC m=+0.943123508 container start bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 02:53:34 np0005534516 keen_lovelace[106061]: 167 167
Nov 25 02:53:34 np0005534516 systemd[1]: libpod-bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7.scope: Deactivated successfully.
Nov 25 02:53:34 np0005534516 podman[105989]: 2025-11-25 07:53:34.857776875 +0000 UTC m=+1.120984421 container attach bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 02:53:34 np0005534516 podman[105989]: 2025-11-25 07:53:34.858405292 +0000 UTC m=+1.121612808 container died bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:53:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 25 02:53:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 25 02:53:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 25 02:53:35 np0005534516 python3.9[106227]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:53:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-994b5f22a176a18432b83973f445a327ce2714ed453226f08f496e78b3871dda-merged.mount: Deactivated successfully.
Nov 25 02:53:35 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 25 02:53:35 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 25 02:53:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 02:53:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 02:53:36 np0005534516 podman[105989]: 2025-11-25 07:53:36.133286487 +0000 UTC m=+2.396494023 container remove bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:53:36 np0005534516 systemd[1]: libpod-conmon-bba36d97cb12753dbc41db1f0a49ef8a841d8a3c759ef4d400661c04b6cb28d7.scope: Deactivated successfully.
Nov 25 02:53:36 np0005534516 podman[106357]: 2025-11-25 07:53:36.317203471 +0000 UTC m=+0.043992241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:36 np0005534516 podman[106357]: 2025-11-25 07:53:36.539737353 +0000 UTC m=+0.266526073 container create 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 02:53:36 np0005534516 systemd[1]: Started libpod-conmon-9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb.scope.
Nov 25 02:53:36 np0005534516 python3.9[106402]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:53:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5694cc5a277099aea06e626fac35be56377b55a40a8a0915ced95d0da72ddee5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5694cc5a277099aea06e626fac35be56377b55a40a8a0915ced95d0da72ddee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5694cc5a277099aea06e626fac35be56377b55a40a8a0915ced95d0da72ddee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5694cc5a277099aea06e626fac35be56377b55a40a8a0915ced95d0da72ddee5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:37 np0005534516 podman[106357]: 2025-11-25 07:53:37.029856426 +0000 UTC m=+0.756645146 container init 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:53:37 np0005534516 podman[106357]: 2025-11-25 07:53:37.037045317 +0000 UTC m=+0.763834007 container start 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:53:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 25 02:53:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 25 02:53:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 25 02:53:37 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 02:53:37 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 02:53:37 np0005534516 podman[106357]: 2025-11-25 07:53:37.369589826 +0000 UTC m=+1.096378526 container attach 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:53:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:37 np0005534516 python3.9[106563]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:53:37 np0005534516 competent_jang[106407]: {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    "0": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "devices": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "/dev/loop3"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            ],
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_name": "ceph_lv0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_size": "21470642176",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "name": "ceph_lv0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "tags": {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_name": "ceph",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.crush_device_class": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.encrypted": "0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_id": "0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.vdo": "0"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            },
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "vg_name": "ceph_vg0"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        }
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    ],
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    "1": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "devices": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "/dev/loop4"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            ],
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_name": "ceph_lv1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_size": "21470642176",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "name": "ceph_lv1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "tags": {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_name": "ceph",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.crush_device_class": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.encrypted": "0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_id": "1",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.vdo": "0"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            },
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "vg_name": "ceph_vg1"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        }
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    ],
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    "2": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "devices": [
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "/dev/loop5"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            ],
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_name": "ceph_lv2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_size": "21470642176",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "name": "ceph_lv2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "tags": {
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.cluster_name": "ceph",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.crush_device_class": "",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.encrypted": "0",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osd_id": "2",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:                "ceph.vdo": "0"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            },
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "type": "block",
Nov 25 02:53:37 np0005534516 competent_jang[106407]:            "vg_name": "ceph_vg2"
Nov 25 02:53:37 np0005534516 competent_jang[106407]:        }
Nov 25 02:53:37 np0005534516 competent_jang[106407]:    ]
Nov 25 02:53:37 np0005534516 competent_jang[106407]: }
Nov 25 02:53:37 np0005534516 systemd[1]: libpod-9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb.scope: Deactivated successfully.
Nov 25 02:53:37 np0005534516 podman[106357]: 2025-11-25 07:53:37.843274691 +0000 UTC m=+1.570063441 container died 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:53:38 np0005534516 python3.9[106730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:53:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 25 02:53:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 25 02:53:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 1 objects/s recovering
Nov 25 02:53:40 np0005534516 python3.9[106880]: ansible-ansible.builtin.service_facts Invoked
Nov 25 02:53:40 np0005534516 network[106900]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 02:53:40 np0005534516 network[106901]: 'network-scripts' will be removed from distribution in near future.
Nov 25 02:53:40 np0005534516 network[106902]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 02:53:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5694cc5a277099aea06e626fac35be56377b55a40a8a0915ced95d0da72ddee5-merged.mount: Deactivated successfully.
Nov 25 02:53:41 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 25 02:53:41 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 25 02:53:41 np0005534516 podman[106357]: 2025-11-25 07:53:41.11405549 +0000 UTC m=+4.840844220 container remove 9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jang, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:53:41 np0005534516 systemd[1]: libpod-conmon-9a156fc51129e01711fd54dbdc11eba9d6335896890533677a2382e668dc42bb.scope: Deactivated successfully.
Nov 25 02:53:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Nov 25 02:53:41 np0005534516 podman[107086]: 2025-11-25 07:53:41.767473437 +0000 UTC m=+0.025259153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:41 np0005534516 podman[107086]: 2025-11-25 07:53:41.901509564 +0000 UTC m=+0.159295190 container create 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:53:42 np0005534516 systemd[1]: Started libpod-conmon-8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c.scope.
Nov 25 02:53:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:42 np0005534516 podman[107086]: 2025-11-25 07:53:42.599269862 +0000 UTC m=+0.857055588 container init 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 02:53:42 np0005534516 podman[107086]: 2025-11-25 07:53:42.611282402 +0000 UTC m=+0.869068058 container start 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:53:42 np0005534516 busy_kare[107119]: 167 167
Nov 25 02:53:42 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 25 02:53:42 np0005534516 systemd[1]: libpod-8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c.scope: Deactivated successfully.
Nov 25 02:53:42 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 25 02:53:42 np0005534516 podman[107086]: 2025-11-25 07:53:42.769873202 +0000 UTC m=+1.027658838 container attach 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:53:42 np0005534516 podman[107086]: 2025-11-25 07:53:42.771405533 +0000 UTC m=+1.029191159 container died 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:53:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-883a02c56133534b852a9f362fa7c3bed927d444a77ebff145c768a844fb7a47-merged.mount: Deactivated successfully.
Nov 25 02:53:43 np0005534516 podman[107086]: 2025-11-25 07:53:43.053596022 +0000 UTC m=+1.311381668 container remove 8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:53:43 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 02:53:43 np0005534516 systemd[1]: libpod-conmon-8769ba1b170dae53f965ea63043198ed33e7e2cbde5d05f3e5fb96b6d9a65c9c.scope: Deactivated successfully.
Nov 25 02:53:43 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 02:53:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 1 objects/s recovering
Nov 25 02:53:43 np0005534516 podman[107184]: 2025-11-25 07:53:43.245500249 +0000 UTC m=+0.083825632 container create a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:53:43 np0005534516 podman[107184]: 2025-11-25 07:53:43.187152996 +0000 UTC m=+0.025478389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:53:43 np0005534516 systemd[1]: Started libpod-conmon-a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f.scope.
Nov 25 02:53:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:53:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8959c9c77fbdbd280b63d9b3a84f8a4e8e7b02ea5776458bde7fdb1c1fc7681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8959c9c77fbdbd280b63d9b3a84f8a4e8e7b02ea5776458bde7fdb1c1fc7681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8959c9c77fbdbd280b63d9b3a84f8a4e8e7b02ea5776458bde7fdb1c1fc7681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8959c9c77fbdbd280b63d9b3a84f8a4e8e7b02ea5776458bde7fdb1c1fc7681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:53:43 np0005534516 podman[107184]: 2025-11-25 07:53:43.372395476 +0000 UTC m=+0.210720889 container init a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:53:43 np0005534516 podman[107184]: 2025-11-25 07:53:43.379601508 +0000 UTC m=+0.217926891 container start a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 02:53:43 np0005534516 podman[107184]: 2025-11-25 07:53:43.433717408 +0000 UTC m=+0.272042791 container attach a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:53:44 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 02:53:44 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 02:53:44 np0005534516 nifty_raman[107203]: {
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_id": 1,
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "type": "bluestore"
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    },
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_id": 2,
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "type": "bluestore"
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    },
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_id": 0,
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:        "type": "bluestore"
Nov 25 02:53:44 np0005534516 nifty_raman[107203]:    }
Nov 25 02:53:44 np0005534516 nifty_raman[107203]: }
Nov 25 02:53:44 np0005534516 systemd[1]: libpod-a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f.scope: Deactivated successfully.
Nov 25 02:53:44 np0005534516 systemd[1]: libpod-a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f.scope: Consumed 1.003s CPU time.
Nov 25 02:53:44 np0005534516 conmon[107203]: conmon a40aa39c80783d38f841 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f.scope/container/memory.events
Nov 25 02:53:44 np0005534516 podman[107184]: 2025-11-25 07:53:44.378450267 +0000 UTC m=+1.216775660 container died a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:53:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d8959c9c77fbdbd280b63d9b3a84f8a4e8e7b02ea5776458bde7fdb1c1fc7681-merged.mount: Deactivated successfully.
Nov 25 02:53:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 02:53:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 02:53:44 np0005534516 podman[107184]: 2025-11-25 07:53:44.661017007 +0000 UTC m=+1.499342420 container remove a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 02:53:44 np0005534516 systemd[1]: libpod-conmon-a40aa39c80783d38f84120ea15c4694e417cbdd8d4559f097690c51c59f51b2f.scope: Deactivated successfully.
Nov 25 02:53:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:53:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:53:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e4657ffd-8ffa-4c38-9cf2-21d2abe919b9 does not exist
Nov 25 02:53:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ce81b583-39d2-49fe-bd78-ea95caccd411 does not exist
Nov 25 02:53:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 1 objects/s recovering
Nov 25 02:53:45 np0005534516 python3.9[107451]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:53:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:45 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:53:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 25 02:53:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 25 02:53:46 np0005534516 python3.9[107602]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:53:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Nov 25 02:53:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Nov 25 02:53:46 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 25 02:53:46 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 25 02:53:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 25 02:53:47 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 25 02:53:47 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 25 02:53:47 np0005534516 python3.9[107756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:53:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:47 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 25 02:53:47 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 25 02:53:48 np0005534516 python3.9[107914]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:53:48 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 25 02:53:48 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 25 02:53:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:49 np0005534516 python3.9[107998]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:53:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 02:53:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 02:53:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 25 02:53:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 25 02:53:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 25 02:53:52 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 25 02:53:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:53 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 02:53:53 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:53:53
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images']
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:53:53 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 25 02:53:53 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 25 02:53:54 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 02:53:54 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 02:53:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 02:53:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 02:53:57 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 25 02:53:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:57 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 25 02:53:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:53:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 02:53:58 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 02:53:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:53:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 25 02:53:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 25 02:53:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 02:53:59 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 02:54:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 25 02:54:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 25 02:54:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:02 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 02:54:02 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 02:54:02 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 25 02:54:02 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 25 02:54:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:54:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:54:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:05 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 25 02:54:05 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 25 02:54:06 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 02:54:06 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 02:54:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 25 02:54:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 25 02:54:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 25 02:54:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 25 02:54:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 25 02:54:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 25 02:54:09 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 02:54:09 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 02:54:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:10 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 02:54:10 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 02:54:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:11 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 25 02:54:11 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 25 02:54:11 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 25 02:54:11 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 25 02:54:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 02:54:12 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 02:54:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 02:54:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 02:54:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 25 02:54:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 25 02:54:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 25 02:54:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 25 02:54:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 02:54:15 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 02:54:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 25 02:54:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 25 02:54:15 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 25 02:54:15 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 25 02:54:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:17 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Nov 25 02:54:17 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Nov 25 02:54:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:18 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 25 02:54:18 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 25 02:54:18 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Nov 25 02:54:18 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Nov 25 02:54:19 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 25 02:54:19 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 25 02:54:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:20 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 25 02:54:20 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 25 02:54:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 25 02:54:21 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 25 02:54:21 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 02:54:21 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 02:54:22 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 25 02:54:22 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 25 02:54:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:24 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 02:54:24 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 02:54:24 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 25 02:54:24 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 25 02:54:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 25 02:54:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 25 02:54:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 25 02:54:27 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 25 02:54:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:29 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 25 02:54:29 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 25 02:54:29 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 02:54:29 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 02:54:30 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 25 02:54:30 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 25 02:54:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 25 02:54:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 25 02:54:32 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 02:54:32 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 02:54:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:33 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 25 02:54:33 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 25 02:54:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 25 02:54:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 25 02:54:34 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 25 02:54:34 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 25 02:54:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 25 02:54:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 25 02:54:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:38 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 25 02:54:38 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 25 02:54:38 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Nov 25 02:54:38 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Nov 25 02:54:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 25 02:54:39 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 25 02:54:40 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 25 02:54:40 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 25 02:54:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:41 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 25 02:54:41 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 25 02:54:42 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 25 02:54:42 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 25 02:54:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:43 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 25 02:54:43 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 25 02:54:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 25 02:54:44 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 25 02:54:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:54:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7f4a99e3-1783-4d22-82f4-f88416b83e57 does not exist
Nov 25 02:54:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4c6c4071-40ee-494a-bbe2-391007f1456b does not exist
Nov 25 02:54:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b0bada7b-3fae-4134-a418-3380f0f15a39 does not exist
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:54:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:54:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:54:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:54:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.469413612 +0000 UTC m=+0.060904174 container create 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 02:54:46 np0005534516 systemd[76588]: Created slice User Background Tasks Slice.
Nov 25 02:54:46 np0005534516 systemd[76588]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 02:54:46 np0005534516 systemd[76588]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.435109329 +0000 UTC m=+0.026599911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:54:46 np0005534516 systemd[1]: Started libpod-conmon-53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650.scope.
Nov 25 02:54:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.713126274 +0000 UTC m=+0.304616866 container init 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.722335356 +0000 UTC m=+0.313825928 container start 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:54:46 np0005534516 hungry_lovelace[108431]: 167 167
Nov 25 02:54:46 np0005534516 systemd[1]: libpod-53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650.scope: Deactivated successfully.
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.897273629 +0000 UTC m=+0.488764231 container attach 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 02:54:46 np0005534516 podman[108414]: 2025-11-25 07:54:46.897960169 +0000 UTC m=+0.489450771 container died 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:54:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5c96fbc61fe88c8cb37bb0e736b02cae90e734678121d35b9403cd5a1e22428f-merged.mount: Deactivated successfully.
Nov 25 02:54:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:48 np0005534516 podman[108414]: 2025-11-25 07:54:48.216078132 +0000 UTC m=+1.807568694 container remove 53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:54:48 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 25 02:54:48 np0005534516 systemd[1]: libpod-conmon-53b8d81dfb90ba72c444509b12f712e6aabfc19f5bd303db89809fe1d60d6650.scope: Deactivated successfully.
Nov 25 02:54:48 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 25 02:54:48 np0005534516 podman[108607]: 2025-11-25 07:54:48.353983979 +0000 UTC m=+0.025778589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:54:48 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 25 02:54:48 np0005534516 python3.9[108601]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:54:48 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 25 02:54:48 np0005534516 podman[108607]: 2025-11-25 07:54:48.604625902 +0000 UTC m=+0.276420462 container create cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 02:54:49 np0005534516 systemd[1]: Started libpod-conmon-cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9.scope.
Nov 25 02:54:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:54:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:49 np0005534516 podman[108607]: 2025-11-25 07:54:49.209636375 +0000 UTC m=+0.881431015 container init cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:54:49 np0005534516 podman[108607]: 2025-11-25 07:54:49.224792361 +0000 UTC m=+0.896586901 container start cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:54:49 np0005534516 podman[108607]: 2025-11-25 07:54:49.465213153 +0000 UTC m=+1.137007713 container attach cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 02:54:50 np0005534516 practical_cohen[108671]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:54:50 np0005534516 practical_cohen[108671]: --> relative data size: 1.0
Nov 25 02:54:50 np0005534516 practical_cohen[108671]: --> All data devices are unavailable
Nov 25 02:54:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 25 02:54:50 np0005534516 podman[108607]: 2025-11-25 07:54:50.356535238 +0000 UTC m=+2.028329818 container died cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 02:54:50 np0005534516 systemd[1]: libpod-cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9.scope: Deactivated successfully.
Nov 25 02:54:50 np0005534516 systemd[1]: libpod-cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9.scope: Consumed 1.056s CPU time.
Nov 25 02:54:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 25 02:54:50 np0005534516 python3.9[108934]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 02:54:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2e19fbc74666f72c4456be255242cb738cf128d5dbbc0b4f8308bc09e9bc89bd-merged.mount: Deactivated successfully.
Nov 25 02:54:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 25 02:54:51 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 25 02:54:51 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 02:54:51 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 02:54:51 np0005534516 python3.9[109102]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 02:54:51 np0005534516 podman[108607]: 2025-11-25 07:54:51.679986369 +0000 UTC m=+3.351780909 container remove cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 02:54:51 np0005534516 systemd[1]: libpod-conmon-cef4b7ffca5b955d7c144bdf2bc0871caf9ffc0622318227e430acc19b8be7c9.scope: Deactivated successfully.
Nov 25 02:54:52 np0005534516 python3.9[109354]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.385616204 +0000 UTC m=+0.055775962 container create c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:54:52 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.357956845 +0000 UTC m=+0.028116623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:54:52 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 25 02:54:52 np0005534516 systemd[1]: Started libpod-conmon-c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282.scope.
Nov 25 02:54:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.67571997 +0000 UTC m=+0.345879778 container init c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.684296906 +0000 UTC m=+0.354456684 container start c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 02:54:52 np0005534516 flamboyant_allen[109487]: 167 167
Nov 25 02:54:52 np0005534516 systemd[1]: libpod-c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282.scope: Deactivated successfully.
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.707073571 +0000 UTC m=+0.377233329 container attach c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:54:52 np0005534516 podman[109404]: 2025-11-25 07:54:52.707517613 +0000 UTC m=+0.377677381 container died c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:54:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fb3b71ca9625c01f84324734b114f744f4fd5c5257cb9cb0fb3f49fdd3036280-merged.mount: Deactivated successfully.
Nov 25 02:54:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:54:53
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:53 np0005534516 python3.9[109581]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 02:54:53 np0005534516 podman[109404]: 2025-11-25 07:54:53.378994912 +0000 UTC m=+1.049154680 container remove c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:53 np0005534516 systemd[1]: libpod-conmon-c0daa79160357689189db46584e8022a38a911dfec8a8c0d92921ce3af822282.scope: Deactivated successfully.
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:54:53 np0005534516 podman[109613]: 2025-11-25 07:54:53.562922432 +0000 UTC m=+0.036077791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:54:53 np0005534516 podman[109613]: 2025-11-25 07:54:53.664013098 +0000 UTC m=+0.137168457 container create 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:54:53 np0005534516 systemd[1]: Started libpod-conmon-7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92.scope.
Nov 25 02:54:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:54:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12101d206958f7c4529115019a4dcbc0cafa5b0dc1b189d081f49affb0038c4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12101d206958f7c4529115019a4dcbc0cafa5b0dc1b189d081f49affb0038c4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12101d206958f7c4529115019a4dcbc0cafa5b0dc1b189d081f49affb0038c4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12101d206958f7c4529115019a4dcbc0cafa5b0dc1b189d081f49affb0038c4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:54:54 np0005534516 podman[109613]: 2025-11-25 07:54:54.225783994 +0000 UTC m=+0.698939453 container init 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:54:54 np0005534516 podman[109613]: 2025-11-25 07:54:54.239420418 +0000 UTC m=+0.712575817 container start 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 02:54:54 np0005534516 podman[109613]: 2025-11-25 07:54:54.380115331 +0000 UTC m=+0.853270690 container attach 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:54:54 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 02:54:54 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 02:54:54 np0005534516 python3.9[109762]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]: {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    "0": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "devices": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "/dev/loop3"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            ],
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_name": "ceph_lv0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_size": "21470642176",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "name": "ceph_lv0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "tags": {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_name": "ceph",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.crush_device_class": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.encrypted": "0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_id": "0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.vdo": "0"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            },
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "vg_name": "ceph_vg0"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        }
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    ],
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    "1": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "devices": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "/dev/loop4"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            ],
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_name": "ceph_lv1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_size": "21470642176",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "name": "ceph_lv1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "tags": {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_name": "ceph",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.crush_device_class": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.encrypted": "0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_id": "1",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.vdo": "0"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            },
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "vg_name": "ceph_vg1"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        }
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    ],
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    "2": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "devices": [
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "/dev/loop5"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            ],
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_name": "ceph_lv2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_size": "21470642176",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "name": "ceph_lv2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "tags": {
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.cluster_name": "ceph",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.crush_device_class": "",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.encrypted": "0",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osd_id": "2",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:                "ceph.vdo": "0"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            },
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "type": "block",
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:            "vg_name": "ceph_vg2"
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:        }
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]:    ]
Nov 25 02:54:55 np0005534516 peaceful_hertz[109630]: }
Nov 25 02:54:55 np0005534516 systemd[1]: libpod-7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92.scope: Deactivated successfully.
Nov 25 02:54:55 np0005534516 podman[109613]: 2025-11-25 07:54:55.100530864 +0000 UTC m=+1.573686263 container died 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 02:54:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:55 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 25 02:54:55 np0005534516 python3.9[109918]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:54:55 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 25 02:54:55 np0005534516 python3.9[110008]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:54:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12101d206958f7c4529115019a4dcbc0cafa5b0dc1b189d081f49affb0038c4c-merged.mount: Deactivated successfully.
Nov 25 02:54:56 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 25 02:54:56 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 25 02:54:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 25 02:54:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 25 02:54:56 np0005534516 podman[109613]: 2025-11-25 07:54:56.818148749 +0000 UTC m=+3.291304118 container remove 7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:54:56 np0005534516 systemd[1]: libpod-conmon-7bc31e75961083914214e8d5b286b568e375d50e9d4a13c299462264af974e92.scope: Deactivated successfully.
Nov 25 02:54:57 np0005534516 python3.9[110162]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:54:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:57 np0005534516 podman[110328]: 2025-11-25 07:54:57.505906253 +0000 UTC m=+0.027972229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:54:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:54:58 np0005534516 podman[110328]: 2025-11-25 07:54:58.003142608 +0000 UTC m=+0.525208564 container create 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:54:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 25 02:54:58 np0005534516 python3.9[110469]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 02:54:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 25 02:54:58 np0005534516 systemd[1]: Started libpod-conmon-6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2.scope.
Nov 25 02:54:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:54:58 np0005534516 podman[110328]: 2025-11-25 07:54:58.953258406 +0000 UTC m=+1.475324372 container init 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:54:58 np0005534516 podman[110328]: 2025-11-25 07:54:58.969645637 +0000 UTC m=+1.491711603 container start 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 02:54:58 np0005534516 beautiful_beaver[110497]: 167 167
Nov 25 02:54:58 np0005534516 systemd[1]: libpod-6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2.scope: Deactivated successfully.
Nov 25 02:54:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:54:59 np0005534516 python3.9[110640]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 02:54:59 np0005534516 podman[110328]: 2025-11-25 07:54:59.300575693 +0000 UTC m=+1.822641719 container attach 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:54:59 np0005534516 podman[110328]: 2025-11-25 07:54:59.301970801 +0000 UTC m=+1.824036757 container died 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:54:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 02:54:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 02:54:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6e6003e181b593b635c3cfc9ecff2d694aec7dda2f5afaf560a5bba04e867e4b-merged.mount: Deactivated successfully.
Nov 25 02:54:59 np0005534516 podman[110328]: 2025-11-25 07:54:59.870762919 +0000 UTC m=+2.392828875 container remove 6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 02:54:59 np0005534516 systemd[1]: libpod-conmon-6d6aa5c90cc386d132aa575da71e260cb7ab722fecf41554da6371fe4093eac2.scope: Deactivated successfully.
Nov 25 02:55:00 np0005534516 podman[110778]: 2025-11-25 07:55:00.11698736 +0000 UTC m=+0.081231791 container create a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 02:55:00 np0005534516 podman[110778]: 2025-11-25 07:55:00.069848087 +0000 UTC m=+0.034092578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:55:00 np0005534516 systemd[1]: Started libpod-conmon-a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa.scope.
Nov 25 02:55:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:55:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913a2d5a8230e64cdd7d9be6dd109ee254359d763248ebaf367da66354902714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:55:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913a2d5a8230e64cdd7d9be6dd109ee254359d763248ebaf367da66354902714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:55:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913a2d5a8230e64cdd7d9be6dd109ee254359d763248ebaf367da66354902714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:55:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913a2d5a8230e64cdd7d9be6dd109ee254359d763248ebaf367da66354902714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:55:00 np0005534516 podman[110778]: 2025-11-25 07:55:00.244103702 +0000 UTC m=+0.208348213 container init a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:55:00 np0005534516 podman[110778]: 2025-11-25 07:55:00.256050089 +0000 UTC m=+0.220294530 container start a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 02:55:00 np0005534516 podman[110778]: 2025-11-25 07:55:00.289819267 +0000 UTC m=+0.254063788 container attach a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:55:00 np0005534516 python3.9[110815]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 02:55:00 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 25 02:55:00 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 25 02:55:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 25 02:55:00 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 25 02:55:01 np0005534516 python3.9[110974]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 02:55:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]: {
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_id": 1,
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "type": "bluestore"
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    },
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_id": 2,
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "type": "bluestore"
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    },
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_id": 0,
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:        "type": "bluestore"
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]:    }
Nov 25 02:55:01 np0005534516 quizzical_raman[110818]: }
Nov 25 02:55:01 np0005534516 systemd[1]: libpod-a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa.scope: Deactivated successfully.
Nov 25 02:55:01 np0005534516 systemd[1]: libpod-a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa.scope: Consumed 1.079s CPU time.
Nov 25 02:55:01 np0005534516 podman[110778]: 2025-11-25 07:55:01.333813804 +0000 UTC m=+1.298058235 container died a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:55:01 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 25 02:55:01 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 25 02:55:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-913a2d5a8230e64cdd7d9be6dd109ee254359d763248ebaf367da66354902714-merged.mount: Deactivated successfully.
Nov 25 02:55:01 np0005534516 podman[110778]: 2025-11-25 07:55:01.552514029 +0000 UTC m=+1.516758460 container remove a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 02:55:01 np0005534516 systemd[1]: libpod-conmon-a7d7dc895bc0e677513d8f6af1b3fe55ad07c7382caafe60ff84129d9c58c4fa.scope: Deactivated successfully.
Nov 25 02:55:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:55:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:55:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:55:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:55:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c9657c73-a498-41f9-b89c-7095accef4cd does not exist
Nov 25 02:55:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f4d95845-04d7-43ef-9358-14c35b4e5888 does not exist
Nov 25 02:55:02 np0005534516 python3.9[111218]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:02 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 25 02:55:02 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 25 02:55:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:55:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:55:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:55:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:55:03 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 25 02:55:03 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 25 02:55:03 np0005534516 python3.9[111371]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:55:04 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 25 02:55:04 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 25 02:55:04 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Nov 25 02:55:04 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Nov 25 02:55:04 np0005534516 python3.9[111523]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:55:05 np0005534516 python3.9[111601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:55:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 25 02:55:05 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 25 02:55:06 np0005534516 python3.9[111753]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:55:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 25 02:55:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 25 02:55:06 np0005534516 python3.9[111831]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:55:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 25 02:55:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 25 02:55:07 np0005534516 python3.9[111983]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 02:55:08 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 02:55:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:09 np0005534516 python3.9[112134]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:55:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 25 02:55:10 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 25 02:55:11 np0005534516 python3.9[112286]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 02:55:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:11 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 25 02:55:11 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 25 02:55:11 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 25 02:55:11 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 25 02:55:11 np0005534516 python3.9[112436]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:55:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:13 np0005534516 python3.9[112588]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:55:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:13 np0005534516 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 02:55:13 np0005534516 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 02:55:13 np0005534516 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 02:55:13 np0005534516 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 02:55:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 25 02:55:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 25 02:55:13 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 25 02:55:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 25 02:55:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 25 02:55:13 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 25 02:55:13 np0005534516 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 02:55:14 np0005534516 python3.9[112749]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 02:55:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 25 02:55:14 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 25 02:55:14 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 25 02:55:14 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 25 02:55:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 25 02:55:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 25 02:55:16 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 25 02:55:16 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 25 02:55:16 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 25 02:55:16 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 25 02:55:16 np0005534516 python3.9[112901]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:55:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:17 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 25 02:55:17 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 25 02:55:17 np0005534516 python3.9[113055]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:55:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:18 np0005534516 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 02:55:18 np0005534516 systemd[1]: session-35.scope: Consumed 1min 7.677s CPU time.
Nov 25 02:55:18 np0005534516 systemd-logind[822]: Session 35 logged out. Waiting for processes to exit.
Nov 25 02:55:18 np0005534516 systemd-logind[822]: Removed session 35.
Nov 25 02:55:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:19 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 02:55:19 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 02:55:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 25 02:55:21 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 25 02:55:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:23 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 25 02:55:23 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:23 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 25 02:55:23 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 25 02:55:23 np0005534516 systemd-logind[822]: New session 36 of user zuul.
Nov 25 02:55:23 np0005534516 systemd[1]: Started Session 36 of User zuul.
Nov 25 02:55:24 np0005534516 python3.9[113235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:55:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:25 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 25 02:55:25 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 25 02:55:25 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 25 02:55:25 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 25 02:55:25 np0005534516 python3.9[113391]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 02:55:26 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 25 02:55:26 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 25 02:55:26 np0005534516 python3.9[113544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:55:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 25 02:55:27 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 25 02:55:27 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 25 02:55:27 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 25 02:55:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:27 np0005534516 python3.9[113628]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 02:55:28 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 25 02:55:28 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 25 02:55:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:30 np0005534516 python3.9[113781]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:30 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 25 02:55:30 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 02:55:30 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 25 02:55:30 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 02:55:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:31 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 25 02:55:31 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 25 02:55:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 25 02:55:31 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 25 02:55:32 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 25 02:55:32 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 25 02:55:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:32 np0005534516 python3.9[113934]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 02:55:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 25 02:55:33 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 25 02:55:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 25 02:55:33 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 25 02:55:33 np0005534516 python3.9[114087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:55:34 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 25 02:55:34 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 25 02:55:34 np0005534516 python3.9[114239]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 02:55:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 25 02:55:35 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 25 02:55:36 np0005534516 python3.9[114389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:55:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 25 02:55:36 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 25 02:55:37 np0005534516 python3.9[114547]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:37 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 25 02:55:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:37 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 25 02:55:37 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 25 02:55:37 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 25 02:55:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 25 02:55:37 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 25 02:55:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:38 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 25 02:55:38 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 25 02:55:38 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 25 02:55:38 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 25 02:55:38 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 25 02:55:38 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 25 02:55:39 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 25 02:55:39 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 25 02:55:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:39 np0005534516 python3.9[114700]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:55:40 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 25 02:55:40 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 25 02:55:40 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Nov 25 02:55:40 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Nov 25 02:55:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:41 np0005534516 python3.9[114987]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 02:55:42 np0005534516 python3.9[115137]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:55:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:43 np0005534516 python3.9[115291]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 25 02:55:45 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 25 02:55:45 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 25 02:55:45 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 25 02:55:45 np0005534516 python3.9[115444]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:55:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 25 02:55:46 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 25 02:55:47 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 02:55:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:47 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 02:55:47 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 25 02:55:47 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 25 02:55:47 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Nov 25 02:55:47 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Nov 25 02:55:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:48 np0005534516 python3.9[115598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:55:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:49 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 25 02:55:49 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 25 02:55:49 np0005534516 python3.9[115752]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 02:55:50 np0005534516 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 02:55:50 np0005534516 systemd[1]: session-36.scope: Consumed 19.866s CPU time.
Nov 25 02:55:50 np0005534516 systemd-logind[822]: Session 36 logged out. Waiting for processes to exit.
Nov 25 02:55:50 np0005534516 systemd-logind[822]: Removed session 36.
Nov 25 02:55:50 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Nov 25 02:55:50 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Nov 25 02:55:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 25 02:55:50 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 25 02:55:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:55:53
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta']
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:53 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 25 02:55:53 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 25 02:55:53 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 25 02:55:53 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:55:54 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Nov 25 02:55:54 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Nov 25 02:55:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:55 np0005534516 systemd-logind[822]: New session 37 of user zuul.
Nov 25 02:55:55 np0005534516 systemd[1]: Started Session 37 of User zuul.
Nov 25 02:55:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 25 02:55:56 np0005534516 ceph-osd[89702]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 25 02:55:56 np0005534516 python3.9[115930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:55:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:55:58 np0005534516 python3.9[116084]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:55:58 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 25 02:55:58 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 25 02:55:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 02:55:58 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 02:55:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.17 deep-scrub starts
Nov 25 02:55:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:55:59 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.17 deep-scrub ok
Nov 25 02:55:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 25 02:55:59 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 25 02:55:59 np0005534516 python3.9[116277]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:55:59 np0005534516 systemd-logind[822]: Session 37 logged out. Waiting for processes to exit.
Nov 25 02:55:59 np0005534516 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 02:55:59 np0005534516 systemd[1]: session-37.scope: Consumed 2.751s CPU time.
Nov 25 02:55:59 np0005534516 systemd-logind[822]: Removed session 37.
Nov 25 02:56:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Nov 25 02:56:02 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eb54cda2-884d-4426-9923-52b3f39a9bb1 does not exist
Nov 25 02:56:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cf38c5f3-8d38-42b3-82ad-fbd3d86c7dc4 does not exist
Nov 25 02:56:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 568440ff-5361-42f5-83ae-2b75715bbb01 does not exist
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.179527647 +0000 UTC m=+0.022033075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.317830015 +0000 UTC m=+0.160335423 container create 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:56:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:56:03 np0005534516 systemd[1]: Started libpod-conmon-0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6.scope.
Nov 25 02:56:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.536807269 +0000 UTC m=+0.379312697 container init 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.546294824 +0000 UTC m=+0.388800232 container start 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:56:03 np0005534516 goofy_chaplygin[116588]: 167 167
Nov 25 02:56:03 np0005534516 systemd[1]: libpod-0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6.scope: Deactivated successfully.
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.730022753 +0000 UTC m=+0.572528181 container attach 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.731016814 +0000 UTC m=+0.573522242 container died 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 02:56:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4fb8eda5b4cb4a90619ff0c99508fa0f4e7188b31be0b529d874ae66ce6217d2-merged.mount: Deactivated successfully.
Nov 25 02:56:03 np0005534516 podman[116572]: 2025-11-25 07:56:03.961976311 +0000 UTC m=+0.804481719 container remove 0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:56:04 np0005534516 systemd[1]: libpod-conmon-0258bf6e5ff0bf5fe9f22f9f6d55acc51a466821b9af1b7615719fce6b754bd6.scope: Deactivated successfully.
Nov 25 02:56:04 np0005534516 podman[116611]: 2025-11-25 07:56:04.165543506 +0000 UTC m=+0.090854843 container create be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:56:04 np0005534516 podman[116611]: 2025-11-25 07:56:04.097394649 +0000 UTC m=+0.022706016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:04 np0005534516 systemd[1]: Started libpod-conmon-be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff.scope.
Nov 25 02:56:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:04 np0005534516 podman[116611]: 2025-11-25 07:56:04.352036552 +0000 UTC m=+0.277347909 container init be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:56:04 np0005534516 podman[116611]: 2025-11-25 07:56:04.358951586 +0000 UTC m=+0.284262923 container start be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:56:04 np0005534516 podman[116611]: 2025-11-25 07:56:04.429914432 +0000 UTC m=+0.355225809 container attach be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:56:05 np0005534516 systemd-logind[822]: New session 38 of user zuul.
Nov 25 02:56:05 np0005534516 systemd[1]: Started Session 38 of User zuul.
Nov 25 02:56:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.409941) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365410734, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7265, "num_deletes": 251, "total_data_size": 9090011, "memory_usage": 9253040, "flush_reason": "Manual Compaction"}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 02:56:05 np0005534516 confident_poitras[116627]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:56:05 np0005534516 confident_poitras[116627]: --> relative data size: 1.0
Nov 25 02:56:05 np0005534516 confident_poitras[116627]: --> All data devices are unavailable
Nov 25 02:56:05 np0005534516 systemd[1]: libpod-be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff.scope: Deactivated successfully.
Nov 25 02:56:05 np0005534516 systemd[1]: libpod-be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff.scope: Consumed 1.021s CPU time.
Nov 25 02:56:05 np0005534516 podman[116611]: 2025-11-25 07:56:05.462939463 +0000 UTC m=+1.388250790 container died be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365566847, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7482321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 7402, "table_properties": {"data_size": 7455260, "index_size": 17538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 78757, "raw_average_key_size": 23, "raw_value_size": 7390900, "raw_average_value_size": 2201, "num_data_blocks": 774, "num_entries": 3357, "num_filter_entries": 3357, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056885, "oldest_key_time": 1764056885, "file_creation_time": 1764057365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 156993 microseconds, and 19256 cpu microseconds.
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.566936) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7482321 bytes OK
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.566965) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.572920) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.572969) EVENT_LOG_v1 {"time_micros": 1764057365572958, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.573009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9057913, prev total WAL file size 9057913, number of live WAL files 2.
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.575604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7306KB) 13(52KB) 8(1944B)]
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365575713, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7538418, "oldest_snapshot_seqno": -1}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3172 keys, 7494188 bytes, temperature: kUnknown
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365775170, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7494188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7467627, "index_size": 17555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76802, "raw_average_key_size": 24, "raw_value_size": 7404924, "raw_average_value_size": 2334, "num_data_blocks": 776, "num_entries": 3172, "num_filter_entries": 3172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764057365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 02:56:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dc377dc47bb65dbd7ade1f7000590616b1e168cf189c3580238012c2da77c032-merged.mount: Deactivated successfully.
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.775633) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7494188 bytes
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.807884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.7 rd, 37.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.2, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3462, records dropped: 290 output_compression: NoCompression
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.807936) EVENT_LOG_v1 {"time_micros": 1764057365807916, "job": 4, "event": "compaction_finished", "compaction_time_micros": 199749, "compaction_time_cpu_micros": 16775, "output_level": 6, "num_output_files": 1, "total_output_size": 7494188, "num_input_records": 3462, "num_output_records": 3172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365810835, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365811153, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057365811426, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 02:56:05 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:56:05.575476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:56:06 np0005534516 python3.9[116822]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:56:06 np0005534516 podman[116611]: 2025-11-25 07:56:06.103145136 +0000 UTC m=+2.028456473 container remove be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:56:06 np0005534516 systemd[1]: libpod-conmon-be33d92febe82b9511e2b440834034e06710e7536b4addd040a7c13f34cd93ff.scope: Deactivated successfully.
Nov 25 02:56:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Nov 25 02:56:06 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Nov 25 02:56:06 np0005534516 podman[117062]: 2025-11-25 07:56:06.759279084 +0000 UTC m=+0.057009652 container create 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:56:06 np0005534516 podman[117062]: 2025-11-25 07:56:06.728217509 +0000 UTC m=+0.025948097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:06 np0005534516 systemd[1]: Started libpod-conmon-92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23.scope.
Nov 25 02:56:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:07 np0005534516 podman[117062]: 2025-11-25 07:56:07.065287144 +0000 UTC m=+0.363017812 container init 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:56:07 np0005534516 podman[117062]: 2025-11-25 07:56:07.074594163 +0000 UTC m=+0.372324731 container start 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 02:56:07 np0005534516 trusting_mayer[117129]: 167 167
Nov 25 02:56:07 np0005534516 systemd[1]: libpod-92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23.scope: Deactivated successfully.
Nov 25 02:56:07 np0005534516 podman[117062]: 2025-11-25 07:56:07.098674411 +0000 UTC m=+0.396404989 container attach 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:56:07 np0005534516 podman[117062]: 2025-11-25 07:56:07.09960573 +0000 UTC m=+0.397336298 container died 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:56:07 np0005534516 python3.9[117126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:56:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 25 02:56:07 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 25 02:56:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 25 02:56:07 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 25 02:56:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b71257a1312ce9a706ea005b6aa1696816e58a19f6a3f24b72fe8774a1251ec7-merged.mount: Deactivated successfully.
Nov 25 02:56:07 np0005534516 podman[117062]: 2025-11-25 07:56:07.54497553 +0000 UTC m=+0.842706138 container remove 92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 02:56:07 np0005534516 systemd[1]: libpod-conmon-92362dfa9aab2af54fd0e2ee51a830b365c7bfb9af3ca11308b67eef5222dc23.scope: Deactivated successfully.
Nov 25 02:56:07 np0005534516 podman[117235]: 2025-11-25 07:56:07.737225333 +0000 UTC m=+0.044429201 container create 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:56:07 np0005534516 systemd[1]: Started libpod-conmon-9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5.scope.
Nov 25 02:56:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:07 np0005534516 podman[117235]: 2025-11-25 07:56:07.716772598 +0000 UTC m=+0.023976486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9826ae3f82d401a546b0456c5ab41746a5ea9624dc30af3adb7f2b43c371ae86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9826ae3f82d401a546b0456c5ab41746a5ea9624dc30af3adb7f2b43c371ae86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9826ae3f82d401a546b0456c5ab41746a5ea9624dc30af3adb7f2b43c371ae86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9826ae3f82d401a546b0456c5ab41746a5ea9624dc30af3adb7f2b43c371ae86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:07 np0005534516 podman[117235]: 2025-11-25 07:56:07.849781471 +0000 UTC m=+0.156985359 container init 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 02:56:07 np0005534516 podman[117235]: 2025-11-25 07:56:07.85842335 +0000 UTC m=+0.165627208 container start 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 02:56:07 np0005534516 podman[117235]: 2025-11-25 07:56:07.874513539 +0000 UTC m=+0.181717457 container attach 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:56:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:08 np0005534516 python3.9[117332]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:56:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 02:56:08 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]: {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    "0": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "devices": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "/dev/loop3"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            ],
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_name": "ceph_lv0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_size": "21470642176",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "name": "ceph_lv0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "tags": {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_name": "ceph",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.crush_device_class": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.encrypted": "0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_id": "0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.vdo": "0"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            },
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "vg_name": "ceph_vg0"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        }
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    ],
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    "1": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "devices": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "/dev/loop4"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            ],
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_name": "ceph_lv1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_size": "21470642176",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "name": "ceph_lv1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "tags": {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_name": "ceph",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.crush_device_class": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.encrypted": "0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_id": "1",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.vdo": "0"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            },
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "vg_name": "ceph_vg1"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        }
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    ],
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    "2": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "devices": [
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "/dev/loop5"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            ],
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_name": "ceph_lv2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_size": "21470642176",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "name": "ceph_lv2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "tags": {
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.cluster_name": "ceph",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.crush_device_class": "",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.encrypted": "0",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osd_id": "2",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:                "ceph.vdo": "0"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            },
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "type": "block",
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:            "vg_name": "ceph_vg2"
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:        }
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]:    ]
Nov 25 02:56:08 np0005534516 pensive_chandrasekhar[117275]: }
Nov 25 02:56:08 np0005534516 systemd[1]: libpod-9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5.scope: Deactivated successfully.
Nov 25 02:56:08 np0005534516 podman[117235]: 2025-11-25 07:56:08.659553804 +0000 UTC m=+0.966757682 container died 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:56:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9826ae3f82d401a546b0456c5ab41746a5ea9624dc30af3adb7f2b43c371ae86-merged.mount: Deactivated successfully.
Nov 25 02:56:08 np0005534516 podman[117235]: 2025-11-25 07:56:08.764384281 +0000 UTC m=+1.071588149 container remove 9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:56:08 np0005534516 systemd[1]: libpod-conmon-9654538889b11f9bb92ece479a8702f4f8fa1791454396afd07ed125b0dde5e5.scope: Deactivated successfully.
Nov 25 02:56:09 np0005534516 python3.9[117505]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:56:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.395996909 +0000 UTC m=+0.064903069 container create 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:56:09 np0005534516 systemd[1]: Started libpod-conmon-725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4.scope.
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.351474015 +0000 UTC m=+0.020380195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.481880457 +0000 UTC m=+0.150786637 container init 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.489238546 +0000 UTC m=+0.158144706 container start 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:56:09 np0005534516 boring_williams[117588]: 167 167
Nov 25 02:56:09 np0005534516 systemd[1]: libpod-725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4.scope: Deactivated successfully.
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.49870287 +0000 UTC m=+0.167609030 container attach 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.499207555 +0000 UTC m=+0.168113735 container died 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:56:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-00b8dce4ae92f8902e6eee567fd48d3ccb584dffa8681209258962f13d94dcb5-merged.mount: Deactivated successfully.
Nov 25 02:56:09 np0005534516 podman[117571]: 2025-11-25 07:56:09.606583382 +0000 UTC m=+0.275489542 container remove 725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:56:09 np0005534516 systemd[1]: libpod-conmon-725edfb06f4a3c4f43d4cda7327edcdba3e95a7cf5aa33bf68a2d944d056afd4.scope: Deactivated successfully.
Nov 25 02:56:09 np0005534516 podman[117614]: 2025-11-25 07:56:09.78190343 +0000 UTC m=+0.063399971 container create b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:56:09 np0005534516 systemd[1]: Started libpod-conmon-b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28.scope.
Nov 25 02:56:09 np0005534516 podman[117614]: 2025-11-25 07:56:09.746140399 +0000 UTC m=+0.027636980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:56:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:56:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08978361bdb751653b76f38ed0c8c627930cf431ba9bf63835d97b56b77229e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08978361bdb751653b76f38ed0c8c627930cf431ba9bf63835d97b56b77229e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08978361bdb751653b76f38ed0c8c627930cf431ba9bf63835d97b56b77229e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08978361bdb751653b76f38ed0c8c627930cf431ba9bf63835d97b56b77229e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:56:09 np0005534516 podman[117614]: 2025-11-25 07:56:09.880511744 +0000 UTC m=+0.162008305 container init b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:56:09 np0005534516 podman[117614]: 2025-11-25 07:56:09.888386968 +0000 UTC m=+0.169883489 container start b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:56:09 np0005534516 podman[117614]: 2025-11-25 07:56:09.894612553 +0000 UTC m=+0.176109084 container attach b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]: {
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_id": 1,
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "type": "bluestore"
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    },
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_id": 2,
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "type": "bluestore"
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    },
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_id": 0,
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:        "type": "bluestore"
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]:    }
Nov 25 02:56:10 np0005534516 optimistic_galileo[117632]: }
Nov 25 02:56:10 np0005534516 systemd[1]: libpod-b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28.scope: Deactivated successfully.
Nov 25 02:56:10 np0005534516 systemd[1]: libpod-b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28.scope: Consumed 1.062s CPU time.
Nov 25 02:56:10 np0005534516 podman[117614]: 2025-11-25 07:56:10.942549626 +0000 UTC m=+1.224046157 container died b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 02:56:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-08978361bdb751653b76f38ed0c8c627930cf431ba9bf63835d97b56b77229e0-merged.mount: Deactivated successfully.
Nov 25 02:56:11 np0005534516 podman[117614]: 2025-11-25 07:56:11.030821969 +0000 UTC m=+1.312318500 container remove b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:56:11 np0005534516 systemd[1]: libpod-conmon-b386be589caf632d636ddd3c3329cd02b24da31f5cee5127a5bf2ea5e28eae28.scope: Deactivated successfully.
Nov 25 02:56:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:56:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:56:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f0a9cffa-5266-4075-a741-340f1f2dfac5 does not exist
Nov 25 02:56:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d4d3c312-e4c9-4b92-9719-a5ad4370311d does not exist
Nov 25 02:56:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:11 np0005534516 python3.9[117857]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:56:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:56:12 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 25 02:56:12 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 25 02:56:12 np0005534516 python3.9[118072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 25 02:56:13 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 25 02:56:13 np0005534516 python3.9[118224]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:56:14 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 25 02:56:14 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 25 02:56:14 np0005534516 python3.9[118387]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:14 np0005534516 python3.9[118465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 25 02:56:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:15 np0005534516 ceph-osd[88620]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 25 02:56:15 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Nov 25 02:56:15 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Nov 25 02:56:15 np0005534516 python3.9[118617]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:16 np0005534516 python3.9[118695]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:56:16 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 25 02:56:16 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 25 02:56:17 np0005534516 python3.9[118847]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:56:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:17 np0005534516 python3.9[118999]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:56:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:18 np0005534516 python3.9[119151]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:56:19 np0005534516 python3.9[119303]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:56:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:19 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 25 02:56:19 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 25 02:56:19 np0005534516 python3.9[119455]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:56:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 25 02:56:22 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 25 02:56:22 np0005534516 python3.9[119609]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:56:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:23 np0005534516 python3.9[119763]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:24 np0005534516 python3.9[119915]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:56:24 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 25 02:56:24 np0005534516 ceph-osd[90711]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 25 02:56:24 np0005534516 python3.9[120067]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:56:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:25 np0005534516 python3.9[120220]: ansible-service_facts Invoked
Nov 25 02:56:25 np0005534516 network[120237]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 02:56:25 np0005534516 network[120238]: 'network-scripts' will be removed from distribution in near future.
Nov 25 02:56:25 np0005534516 network[120239]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 02:56:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:30 np0005534516 python3.9[120691]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:56:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:33 np0005534516 python3.9[120844]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 02:56:34 np0005534516 python3.9[120996]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:35 np0005534516 python3.9[121074]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:36 np0005534516 python3.9[121227]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:36 np0005534516 python3.9[121305]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:37 np0005534516 python3.9[121457]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:38 np0005534516 python3.9[121609]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:56:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:40 np0005534516 python3.9[121693]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:56:40 np0005534516 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 02:56:40 np0005534516 systemd[1]: session-38.scope: Consumed 25.450s CPU time.
Nov 25 02:56:40 np0005534516 systemd-logind[822]: Session 38 logged out. Waiting for processes to exit.
Nov 25 02:56:40 np0005534516 systemd-logind[822]: Removed session 38.
Nov 25 02:56:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:46 np0005534516 systemd-logind[822]: New session 39 of user zuul.
Nov 25 02:56:46 np0005534516 systemd[1]: Started Session 39 of User zuul.
Nov 25 02:56:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:47 np0005534516 python3.9[121875]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:48 np0005534516 python3.9[122027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:48 np0005534516 python3.9[122105]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:49 np0005534516 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 02:56:49 np0005534516 systemd[1]: session-39.scope: Consumed 1.672s CPU time.
Nov 25 02:56:49 np0005534516 systemd-logind[822]: Session 39 logged out. Waiting for processes to exit.
Nov 25 02:56:49 np0005534516 systemd-logind[822]: Removed session 39.
Nov 25 02:56:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:56:53
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images']
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:56:54 np0005534516 systemd-logind[822]: New session 40 of user zuul.
Nov 25 02:56:54 np0005534516 systemd[1]: Started Session 40 of User zuul.
Nov 25 02:56:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:55 np0005534516 python3.9[122283]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:56:56 np0005534516 python3.9[122439]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:57 np0005534516 python3.9[122614]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:56:58 np0005534516 python3.9[122692]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.q7zoqkb4 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:56:59 np0005534516 python3.9[122844]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:56:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:56:59 np0005534516 python3.9[122922]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.zu5if69l recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:00 np0005534516 python3.9[123074]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:57:01 np0005534516 python3.9[123226]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:01 np0005534516 python3.9[123304]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:57:02 np0005534516 python3.9[123456]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:02 np0005534516 python3.9[123534]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:57:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:57:03 np0005534516 python3.9[123686]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:04 np0005534516 python3.9[123838]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:05 np0005534516 python3.9[123916]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:05 np0005534516 python3.9[124068]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:06 np0005534516 python3.9[124146]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:07 np0005534516 python3.9[124298]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:57:07 np0005534516 systemd[1]: Reloading.
Nov 25 02:57:07 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:57:07 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:57:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:09 np0005534516 python3.9[124488]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:09 np0005534516 python3.9[124566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:10 np0005534516 python3.9[124718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:11 np0005534516 python3.9[124796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f2e3eb55-4387-45ef-8f4d-4ddafc230c9b does not exist
Nov 25 02:57:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b033fa95-c935-4d13-bbd7-74cc8174e57d does not exist
Nov 25 02:57:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 59f9259b-5042-470b-bb14-2714549c7607 does not exist
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:57:12 np0005534516 python3.9[125070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 02:57:12 np0005534516 systemd[1]: Reloading.
Nov 25 02:57:12 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 02:57:12 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 02:57:12 np0005534516 systemd[1]: Starting Create netns directory...
Nov 25 02:57:12 np0005534516 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 02:57:12 np0005534516 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 02:57:12 np0005534516 systemd[1]: Finished Create netns directory.
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:57:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.082581053 +0000 UTC m=+0.027829382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.189899905 +0000 UTC m=+0.135148194 container create b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 02:57:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:13 np0005534516 systemd[1]: Started libpod-conmon-b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54.scope.
Nov 25 02:57:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:13 np0005534516 python3.9[125421]: ansible-ansible.builtin.service_facts Invoked
Nov 25 02:57:13 np0005534516 network[125444]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 02:57:13 np0005534516 network[125445]: 'network-scripts' will be removed from distribution in near future.
Nov 25 02:57:13 np0005534516 network[125446]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.741565697 +0000 UTC m=+0.686814076 container init b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.754569551 +0000 UTC m=+0.699817850 container start b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:57:13 np0005534516 happy_vaughan[125425]: 167 167
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.851962817 +0000 UTC m=+0.797211186 container attach b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 02:57:13 np0005534516 podman[125334]: 2025-11-25 07:57:13.853689397 +0000 UTC m=+0.798937686 container died b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 02:57:14 np0005534516 systemd[1]: libpod-b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54.scope: Deactivated successfully.
Nov 25 02:57:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f53e9f58713044d8cb8b5ffdb00e70000625c793a4622d214d0297af58af9904-merged.mount: Deactivated successfully.
Nov 25 02:57:14 np0005534516 podman[125334]: 2025-11-25 07:57:14.815230786 +0000 UTC m=+1.760479075 container remove b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:57:14 np0005534516 systemd[1]: libpod-conmon-b729e20ba1527e9a3236bd4245299dd04a3e5e57d94f10d15e8cd05530db6e54.scope: Deactivated successfully.
Nov 25 02:57:15 np0005534516 podman[125496]: 2025-11-25 07:57:15.023766143 +0000 UTC m=+0.065545349 container create 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:57:15 np0005534516 podman[125496]: 2025-11-25 07:57:14.983005499 +0000 UTC m=+0.024784695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:15 np0005534516 systemd[1]: Started libpod-conmon-71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702.scope.
Nov 25 02:57:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:15 np0005534516 podman[125496]: 2025-11-25 07:57:15.148098085 +0000 UTC m=+0.189877291 container init 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:57:15 np0005534516 podman[125496]: 2025-11-25 07:57:15.15901528 +0000 UTC m=+0.200794476 container start 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:57:15 np0005534516 podman[125496]: 2025-11-25 07:57:15.186549882 +0000 UTC m=+0.228329048 container attach 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:57:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:16 np0005534516 pedantic_sutherland[125519]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:57:16 np0005534516 pedantic_sutherland[125519]: --> relative data size: 1.0
Nov 25 02:57:16 np0005534516 pedantic_sutherland[125519]: --> All data devices are unavailable
Nov 25 02:57:16 np0005534516 systemd[1]: libpod-71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702.scope: Deactivated successfully.
Nov 25 02:57:16 np0005534516 systemd[1]: libpod-71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702.scope: Consumed 1.118s CPU time.
Nov 25 02:57:16 np0005534516 podman[125496]: 2025-11-25 07:57:16.327299435 +0000 UTC m=+1.369078621 container died 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:57:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-329f05e22e8d4048e9a66f9e191fe8e52dc640faf504162cf77d2fe0e3a1ab11-merged.mount: Deactivated successfully.
Nov 25 02:57:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:17 np0005534516 podman[125496]: 2025-11-25 07:57:17.276394695 +0000 UTC m=+2.318173971 container remove 71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:57:17 np0005534516 systemd[1]: libpod-conmon-71d1d1dbd3f0f90887a30339baa6965298f9a0062fa0eddac4337565a206e702.scope: Deactivated successfully.
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:17.90729723 +0000 UTC m=+0.020447660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.075734092 +0000 UTC m=+0.188884492 container create 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 02:57:18 np0005534516 systemd[1]: Started libpod-conmon-01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18.scope.
Nov 25 02:57:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.369087404 +0000 UTC m=+0.482237834 container init 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.384470126 +0000 UTC m=+0.497620536 container start 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 02:57:18 np0005534516 dazzling_einstein[125839]: 167 167
Nov 25 02:57:18 np0005534516 systemd[1]: libpod-01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18.scope: Deactivated successfully.
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.448947664 +0000 UTC m=+0.562098064 container attach 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.449229702 +0000 UTC m=+0.562380102 container died 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:57:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5ec24217189eb9548f038e4ec54100564a313549f3fc02f56800d07fc66c88fa-merged.mount: Deactivated successfully.
Nov 25 02:57:18 np0005534516 podman[125800]: 2025-11-25 07:57:18.644433975 +0000 UTC m=+0.757584395 container remove 01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:57:18 np0005534516 systemd[1]: libpod-conmon-01fb2deb38f96c68fd51ceebc92dd38f6f772882643ddb6424fca56aaa765a18.scope: Deactivated successfully.
Nov 25 02:57:18 np0005534516 python3.9[125960]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:18 np0005534516 podman[125968]: 2025-11-25 07:57:18.814728501 +0000 UTC m=+0.029740258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:18 np0005534516 podman[125968]: 2025-11-25 07:57:18.934361057 +0000 UTC m=+0.149372784 container create cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:57:19 np0005534516 systemd[1]: Started libpod-conmon-cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566.scope.
Nov 25 02:57:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65180ef8b384e30baaf150517797559e9071d9fb1e64e1bd639b6ca1cb73667/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65180ef8b384e30baaf150517797559e9071d9fb1e64e1bd639b6ca1cb73667/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65180ef8b384e30baaf150517797559e9071d9fb1e64e1bd639b6ca1cb73667/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65180ef8b384e30baaf150517797559e9071d9fb1e64e1bd639b6ca1cb73667/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:19 np0005534516 podman[125968]: 2025-11-25 07:57:19.212393997 +0000 UTC m=+0.427405784 container init cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:57:19 np0005534516 podman[125968]: 2025-11-25 07:57:19.222157018 +0000 UTC m=+0.437168755 container start cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 25 02:57:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:19 np0005534516 python3.9[126059]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:19 np0005534516 podman[125968]: 2025-11-25 07:57:19.352851073 +0000 UTC m=+0.567862890 container attach cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]: {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    "0": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "devices": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "/dev/loop3"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            ],
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_name": "ceph_lv0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_size": "21470642176",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "name": "ceph_lv0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "tags": {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_name": "ceph",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.crush_device_class": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.encrypted": "0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_id": "0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.vdo": "0"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            },
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "vg_name": "ceph_vg0"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        }
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    ],
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    "1": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "devices": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "/dev/loop4"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            ],
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_name": "ceph_lv1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_size": "21470642176",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "name": "ceph_lv1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "tags": {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_name": "ceph",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.crush_device_class": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.encrypted": "0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_id": "1",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.vdo": "0"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            },
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "vg_name": "ceph_vg1"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        }
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    ],
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    "2": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "devices": [
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "/dev/loop5"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            ],
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_name": "ceph_lv2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_size": "21470642176",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "name": "ceph_lv2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "tags": {
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.cluster_name": "ceph",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.crush_device_class": "",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.encrypted": "0",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osd_id": "2",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:                "ceph.vdo": "0"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            },
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "type": "block",
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:            "vg_name": "ceph_vg2"
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:        }
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]:    ]
Nov 25 02:57:20 np0005534516 laughing_matsumoto[126063]: }
Nov 25 02:57:20 np0005534516 systemd[1]: libpod-cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566.scope: Deactivated successfully.
Nov 25 02:57:20 np0005534516 podman[125968]: 2025-11-25 07:57:20.036671481 +0000 UTC m=+1.251683208 container died cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:57:20 np0005534516 python3.9[126219]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a65180ef8b384e30baaf150517797559e9071d9fb1e64e1bd639b6ca1cb73667-merged.mount: Deactivated successfully.
Nov 25 02:57:20 np0005534516 podman[125968]: 2025-11-25 07:57:20.784506314 +0000 UTC m=+1.999518041 container remove cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 02:57:20 np0005534516 systemd[1]: libpod-conmon-cc5f17a31f901eb569d12f9ce32e1f182dbd3eb95acbb5f8f15158b5670d9566.scope: Deactivated successfully.
Nov 25 02:57:20 np0005534516 python3.9[126387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:21 np0005534516 python3.9[126565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.386457544 +0000 UTC m=+0.022637933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.509451688 +0000 UTC m=+0.145632087 container create 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 02:57:21 np0005534516 systemd[1]: Started libpod-conmon-211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182.scope.
Nov 25 02:57:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.768741276 +0000 UTC m=+0.404921675 container init 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.776700156 +0000 UTC m=+0.412880565 container start 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 02:57:21 np0005534516 systemd[1]: libpod-211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182.scope: Deactivated successfully.
Nov 25 02:57:21 np0005534516 kind_jang[126668]: 167 167
Nov 25 02:57:21 np0005534516 conmon[126668]: conmon 211aa0d1c2e9d7599205 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182.scope/container/memory.events
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.802349514 +0000 UTC m=+0.438529903 container attach 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:57:21 np0005534516 podman[126606]: 2025-11-25 07:57:21.803033154 +0000 UTC m=+0.439213533 container died 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:57:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-79f3a93b6925e2521422c9761c031d80b77c4684c3cf3252142251b6be29c572-merged.mount: Deactivated successfully.
Nov 25 02:57:22 np0005534516 podman[126606]: 2025-11-25 07:57:22.106388633 +0000 UTC m=+0.742569002 container remove 211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:57:22 np0005534516 systemd[1]: libpod-conmon-211aa0d1c2e9d759920563a4411ee6233407a76773a0c2b868097c545f659182.scope: Deactivated successfully.
Nov 25 02:57:22 np0005534516 podman[126799]: 2025-11-25 07:57:22.354732576 +0000 UTC m=+0.104308495 container create 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:57:22 np0005534516 python3.9[126793]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 02:57:22 np0005534516 podman[126799]: 2025-11-25 07:57:22.286964774 +0000 UTC m=+0.036540693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:57:22 np0005534516 systemd[1]: Starting Time & Date Service...
Nov 25 02:57:22 np0005534516 systemd[1]: Started libpod-conmon-14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410.scope.
Nov 25 02:57:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:57:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6c7acdb7819c97998255fbb23206387a1299b4622ccaf2746cb1c0e56a628b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6c7acdb7819c97998255fbb23206387a1299b4622ccaf2746cb1c0e56a628b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6c7acdb7819c97998255fbb23206387a1299b4622ccaf2746cb1c0e56a628b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6c7acdb7819c97998255fbb23206387a1299b4622ccaf2746cb1c0e56a628b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:57:22 np0005534516 podman[126799]: 2025-11-25 07:57:22.486725828 +0000 UTC m=+0.236301737 container init 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 02:57:22 np0005534516 podman[126799]: 2025-11-25 07:57:22.495348366 +0000 UTC m=+0.244924285 container start 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:57:22 np0005534516 systemd[1]: Started Time & Date Service.
Nov 25 02:57:22 np0005534516 podman[126799]: 2025-11-25 07:57:22.507147607 +0000 UTC m=+0.256723516 container attach 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:57:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:23 np0005534516 python3.9[126978]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:23 np0005534516 distracted_black[126819]: {
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_id": 1,
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "type": "bluestore"
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    },
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_id": 2,
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "type": "bluestore"
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    },
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_id": 0,
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:57:23 np0005534516 distracted_black[126819]:        "type": "bluestore"
Nov 25 02:57:23 np0005534516 distracted_black[126819]:    }
Nov 25 02:57:23 np0005534516 distracted_black[126819]: }
Nov 25 02:57:23 np0005534516 systemd[1]: libpod-14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410.scope: Deactivated successfully.
Nov 25 02:57:23 np0005534516 podman[126799]: 2025-11-25 07:57:23.523415512 +0000 UTC m=+1.272991431 container died 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:57:23 np0005534516 systemd[1]: libpod-14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410.scope: Consumed 1.026s CPU time.
Nov 25 02:57:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ba6c7acdb7819c97998255fbb23206387a1299b4622ccaf2746cb1c0e56a628b-merged.mount: Deactivated successfully.
Nov 25 02:57:23 np0005534516 podman[126799]: 2025-11-25 07:57:23.614551387 +0000 UTC m=+1.364127286 container remove 14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 02:57:23 np0005534516 systemd[1]: libpod-conmon-14cd327d0d41a03d4f610a0bf0534766e4222feeb8a2118ee28e3d91d90dd410.scope: Deactivated successfully.
Nov 25 02:57:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:57:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:57:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3e98672a-ca05-4a8a-b62d-8a0f201d7b30 does not exist
Nov 25 02:57:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a1687e72-52f2-4fd3-81cf-cd1e2a6696c3 does not exist
Nov 25 02:57:24 np0005534516 python3.9[127221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:24 np0005534516 python3.9[127299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:24 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:24 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:57:25 np0005534516 python3.9[127451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:25 np0005534516 python3.9[127529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jrmsaykp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:26 np0005534516 python3.9[127681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:26 np0005534516 python3.9[127759]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:27 np0005534516 python3.9[127911]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:57:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:28 np0005534516 python3[128064]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 02:57:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:29 np0005534516 python3.9[128216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:29 np0005534516 python3.9[128294]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:30 np0005534516 python3.9[128446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:31 np0005534516 python3.9[128524]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:32 np0005534516 python3.9[128676]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:32 np0005534516 python3.9[128754]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:33 np0005534516 python3.9[128906]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:34 np0005534516 python3.9[128984]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:35 np0005534516 python3.9[129136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:35 np0005534516 python3.9[129214]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:36 np0005534516 python3.9[129366]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:57:37 np0005534516 python3.9[129521]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:38 np0005534516 python3.9[129673]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:38 np0005534516 python3.9[129825]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:40 np0005534516 python3.9[129977]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 02:57:40 np0005534516 python3.9[130129]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 02:57:41 np0005534516 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 02:57:41 np0005534516 systemd[1]: session-40.scope: Consumed 32.875s CPU time.
Nov 25 02:57:41 np0005534516 systemd-logind[822]: Session 40 logged out. Waiting for processes to exit.
Nov 25 02:57:41 np0005534516 systemd-logind[822]: Removed session 40.
Nov 25 02:57:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:46 np0005534516 systemd-logind[822]: New session 41 of user zuul.
Nov 25 02:57:46 np0005534516 systemd[1]: Started Session 41 of User zuul.
Nov 25 02:57:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:47 np0005534516 python3.9[130309]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.101222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468101356, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1091, "num_deletes": 251, "total_data_size": 1545944, "memory_usage": 1573856, "flush_reason": "Manual Compaction"}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468191179, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 929188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7403, "largest_seqno": 8493, "table_properties": {"data_size": 925163, "index_size": 1613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10678, "raw_average_key_size": 19, "raw_value_size": 916223, "raw_average_value_size": 1706, "num_data_blocks": 76, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057366, "oldest_key_time": 1764057366, "file_creation_time": 1764057468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 89993 microseconds, and 6820 cpu microseconds.
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.191240) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 929188 bytes OK
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.191266) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.204366) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.204436) EVENT_LOG_v1 {"time_micros": 1764057468204421, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.204470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1540840, prev total WAL file size 1540840, number of live WAL files 2.
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.205983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(907KB)], [20(7318KB)]
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468206060, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8423376, "oldest_snapshot_seqno": -1}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3240 keys, 6411331 bytes, temperature: kUnknown
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468379986, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6411331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6386922, "index_size": 15212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 78620, "raw_average_key_size": 24, "raw_value_size": 6325479, "raw_average_value_size": 1952, "num_data_blocks": 678, "num_entries": 3240, "num_filter_entries": 3240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764057468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.380377) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6411331 bytes
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.400088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.4 rd, 36.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.1 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(16.0) write-amplify(6.9) OK, records in: 3709, records dropped: 469 output_compression: NoCompression
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.400153) EVENT_LOG_v1 {"time_micros": 1764057468400131, "job": 6, "event": "compaction_finished", "compaction_time_micros": 174038, "compaction_time_cpu_micros": 17495, "output_level": 6, "num_output_files": 1, "total_output_size": 6411331, "num_input_records": 3709, "num_output_records": 3240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468400810, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057468402672, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.205901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.402799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.402804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.402806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.402807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:57:48.402808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:57:48 np0005534516 python3.9[130461]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:57:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:49 np0005534516 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 02:57:49 np0005534516 systemd[1]: session-18.scope: Consumed 1min 32.827s CPU time.
Nov 25 02:57:49 np0005534516 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 25 02:57:49 np0005534516 systemd-logind[822]: Removed session 18.
Nov 25 02:57:49 np0005534516 python3.9[130615]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 02:57:50 np0005534516 python3.9[130767]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b5fknnin follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:57:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:51 np0005534516 python3.9[130892]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b5fknnin mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057469.9015937-44-182756198042331/.source.b5fknnin _original_basename=._cic30ut follow=False checksum=e315ea6a9111bd1835474faf1affcfbc46632700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:52 np0005534516 python3.9[131044]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:57:52 np0005534516 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 02:57:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:57:53
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:57:53 np0005534516 python3.9[131198]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDThrOmzZYk/yE7aj0qiUi2XHBodTgD3KuWbEqzNcYV2X9Y5iPECkOOTFwJQ5k3cBVPuxkrbm8m5jaIrc02FUvnMr91CENNlwaofZ5AbVutEMFXttCm2tiap60neuJvQ6E/ODoqVM+rGFs35RB0MJGHUEmc/3ouQ8LzXu4pfQVyHqqYTKzDOlHGGNkVhsiyy2+l/gH7ji0TuOzyatWsvqFSXlSuqlBxETH97ZkLa03H2SqH78HQjb7ck+TKBoDWw6rDtlN6LUCxwwpi3pPQ1lJodoVl2kMH10vwpYoW3pxc7YXUZy6ZHoNOd0QpvhgG6XbKvvF5/PBrxDsYydJQ9+LKaHpQziGJx6eXlsD477eVQPEL616c6ha7mVp9E6ZGsejbP6C0OMXwd8Q/r4O1cXRJ6GMZ/ImJnsge6FL/mf6gH28M++DkpbjDhK3KaEwZaXu3RRkAW1ZEZgs6WU+ggGVhaRHa7oSirk8619FpfHUjYtyFY+ainauRqfgRTVmd5+8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMtlUnlv1KTTK/zS5HI6tQr3+y8723Yr0q2NjDTz7vbf#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMhA6ZDfxgRzvvaEqrlP+xcafhng5agflzPKbv8nTm33sRtBCypYw7k9dI0UHi1piXdLyh0sZ1wIA42UCPG+7g=#012 create=True mode=0644 path=/tmp/ansible.b5fknnin state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:54 np0005534516 python3.9[131350]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b5fknnin' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:57:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:55 np0005534516 python3.9[131504]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b5fknnin state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:57:55 np0005534516 systemd-logind[822]: Session 41 logged out. Waiting for processes to exit.
Nov 25 02:57:55 np0005534516 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 02:57:55 np0005534516 systemd[1]: session-41.scope: Consumed 6.044s CPU time.
Nov 25 02:57:55 np0005534516 systemd-logind[822]: Removed session 41.
Nov 25 02:57:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:57:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:57:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:01 np0005534516 systemd-logind[822]: New session 42 of user zuul.
Nov 25 02:58:01 np0005534516 systemd[1]: Started Session 42 of User zuul.
Nov 25 02:58:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:02 np0005534516 python3.9[131682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:58:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:58:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:58:03 np0005534516 python3.9[131838]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 02:58:04 np0005534516 python3.9[131992]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 02:58:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:05 np0005534516 python3.9[132145]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:58:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 02:58:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1852 writes, 8484 keys, 1852 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1852 writes, 1852 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1852 writes, 8484 keys, 1852 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s#012Interval WAL: 1852 writes, 1852 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     32.3      0.25              0.03         3    0.083       0      0       0.0       0.0#012  L6      1/0    6.11 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     40.7     35.5      0.37              0.03         2    0.187    7171    759       0.0       0.0#012 Sum      1/0    6.11 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     24.4     34.2      0.62              0.06         5    0.125    7171    759       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     24.5     34.3      0.62              0.06         4    0.155    7171    759       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     40.7     35.5      0.37              0.03         2    0.187    7171    759       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     32.5      0.25              0.03         2    0.123       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.6 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 308.00 MB usage: 529.81 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(36,444.47 KB,0.140926%) FilterBlock(6,26.92 KB,0.008536%) IndexBlock(6,58.42 KB,0.0185236%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 02:58:06 np0005534516 python3.9[132298]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:58:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:07 np0005534516 python3.9[132450]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:08 np0005534516 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 02:58:08 np0005534516 systemd[1]: session-42.scope: Consumed 4.361s CPU time.
Nov 25 02:58:08 np0005534516 systemd-logind[822]: Session 42 logged out. Waiting for processes to exit.
Nov 25 02:58:08 np0005534516 systemd-logind[822]: Removed session 42.
Nov 25 02:58:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:14 np0005534516 systemd-logind[822]: New session 43 of user zuul.
Nov 25 02:58:14 np0005534516 systemd[1]: Started Session 43 of User zuul.
Nov 25 02:58:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:15 np0005534516 python3.9[132628]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:58:16 np0005534516 python3.9[132784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:58:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:17 np0005534516 python3.9[132868]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 02:58:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:20 np0005534516 python3.9[133019]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:58:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:21 np0005534516 python3.9[133170]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 02:58:22 np0005534516 python3.9[133320]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:58:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:23 np0005534516 python3.9[133470]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:23 np0005534516 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 02:58:23 np0005534516 systemd[1]: session-43.scope: Consumed 6.804s CPU time.
Nov 25 02:58:23 np0005534516 systemd-logind[822]: Session 43 logged out. Waiting for processes to exit.
Nov 25 02:58:23 np0005534516 systemd-logind[822]: Removed session 43.
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e9125035-4a78-4007-84f4-edf8cbfea53b does not exist
Nov 25 02:58:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fd7f14ab-0c92-49e7-b684-85f4d7c4d407 does not exist
Nov 25 02:58:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e781b92e-dc1f-4acf-9404-ab7021e9af58 does not exist
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:58:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.235001447 +0000 UTC m=+0.059973750 container create da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 02:58:25 np0005534516 systemd[1]: Started libpod-conmon-da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6.scope.
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.203294146 +0000 UTC m=+0.028266469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.339280342 +0000 UTC m=+0.164252665 container init da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.353883999 +0000 UTC m=+0.178856312 container start da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:58:25 np0005534516 systemd[1]: libpod-da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6.scope: Deactivated successfully.
Nov 25 02:58:25 np0005534516 brave_shannon[133786]: 167 167
Nov 25 02:58:25 np0005534516 conmon[133786]: conmon da90fc12f5c7a0f91211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6.scope/container/memory.events
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.371146788 +0000 UTC m=+0.196119131 container attach da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.372567037 +0000 UTC m=+0.197539380 container died da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5b45ba260d741651c79804eea821a078c49f2e8d0cbf89eb8ee06ee6f83f0cde-merged.mount: Deactivated successfully.
Nov 25 02:58:25 np0005534516 podman[133769]: 2025-11-25 07:58:25.49009535 +0000 UTC m=+0.315067653 container remove da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:58:25 np0005534516 systemd[1]: libpod-conmon-da90fc12f5c7a0f91211df31fbf8acb788f4038cc77bc92b80709dcdef6193d6.scope: Deactivated successfully.
Nov 25 02:58:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:58:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:58:25 np0005534516 podman[133810]: 2025-11-25 07:58:25.69060069 +0000 UTC m=+0.050915894 container create 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 02:58:25 np0005534516 systemd[1]: Started libpod-conmon-41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f.scope.
Nov 25 02:58:25 np0005534516 podman[133810]: 2025-11-25 07:58:25.667162824 +0000 UTC m=+0.027478058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:25 np0005534516 podman[133810]: 2025-11-25 07:58:25.878174809 +0000 UTC m=+0.238490023 container init 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 25 02:58:25 np0005534516 podman[133810]: 2025-11-25 07:58:25.890084992 +0000 UTC m=+0.250400236 container start 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:58:25 np0005534516 podman[133810]: 2025-11-25 07:58:25.952414656 +0000 UTC m=+0.312729920 container attach 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:58:26 np0005534516 quizzical_agnesi[133826]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:58:26 np0005534516 quizzical_agnesi[133826]: --> relative data size: 1.0
Nov 25 02:58:26 np0005534516 quizzical_agnesi[133826]: --> All data devices are unavailable
Nov 25 02:58:26 np0005534516 systemd[1]: libpod-41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f.scope: Deactivated successfully.
Nov 25 02:58:26 np0005534516 systemd[1]: libpod-41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f.scope: Consumed 1.049s CPU time.
Nov 25 02:58:27 np0005534516 podman[133855]: 2025-11-25 07:58:27.028991327 +0000 UTC m=+0.023819919 container died 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 02:58:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d987eb8ea1f6c121ae958f69dacb6b878e7699877743424b666cb7ca56c8988b-merged.mount: Deactivated successfully.
Nov 25 02:58:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:27 np0005534516 podman[133855]: 2025-11-25 07:58:27.475462791 +0000 UTC m=+0.470291373 container remove 41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 02:58:27 np0005534516 systemd[1]: libpod-conmon-41ea54823d984e496a811b69c6c6385ab34e7fe57ec5672cafe4c83932b2705f.scope: Deactivated successfully.
Nov 25 02:58:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.269146762 +0000 UTC m=+0.026976203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.559182565 +0000 UTC m=+0.317011956 container create 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 02:58:28 np0005534516 systemd[1]: Started libpod-conmon-0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9.scope.
Nov 25 02:58:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.790105071 +0000 UTC m=+0.547934492 container init 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.800767841 +0000 UTC m=+0.558597232 container start 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:58:28 np0005534516 nervous_jackson[134028]: 167 167
Nov 25 02:58:28 np0005534516 systemd[1]: libpod-0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9.scope: Deactivated successfully.
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.864541555 +0000 UTC m=+0.622370976 container attach 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:58:28 np0005534516 podman[134011]: 2025-11-25 07:58:28.865078009 +0000 UTC m=+0.622907410 container died 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 02:58:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4af872b12c5618e8ce70245fe546662a14ae8cc0db60e30a2f03ceb151970393-merged.mount: Deactivated successfully.
Nov 25 02:58:29 np0005534516 podman[134011]: 2025-11-25 07:58:29.229697309 +0000 UTC m=+0.987526740 container remove 0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 02:58:29 np0005534516 systemd[1]: libpod-conmon-0f8ff5fbe2b7c986fd2ff7083ec3e0893aef72c0e84657b645bfb476883847c9.scope: Deactivated successfully.
Nov 25 02:58:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:29 np0005534516 podman[134052]: 2025-11-25 07:58:29.470529525 +0000 UTC m=+0.072987195 container create c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 02:58:29 np0005534516 podman[134052]: 2025-11-25 07:58:29.439279305 +0000 UTC m=+0.041736965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:29 np0005534516 systemd[1]: Started libpod-conmon-c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1.scope.
Nov 25 02:58:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd1e1f6b466f6d74b14a72a44e53afc51f1d893dd4e16f795e667e0b97b5575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd1e1f6b466f6d74b14a72a44e53afc51f1d893dd4e16f795e667e0b97b5575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd1e1f6b466f6d74b14a72a44e53afc51f1d893dd4e16f795e667e0b97b5575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd1e1f6b466f6d74b14a72a44e53afc51f1d893dd4e16f795e667e0b97b5575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:29 np0005534516 podman[134052]: 2025-11-25 07:58:29.586458116 +0000 UTC m=+0.188915796 container init c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:29 np0005534516 podman[134052]: 2025-11-25 07:58:29.598543344 +0000 UTC m=+0.201001014 container start c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:29 np0005534516 podman[134052]: 2025-11-25 07:58:29.606642024 +0000 UTC m=+0.209099744 container attach c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:29 np0005534516 systemd-logind[822]: New session 44 of user zuul.
Nov 25 02:58:29 np0005534516 systemd[1]: Started Session 44 of User zuul.
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]: {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    "0": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "devices": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "/dev/loop3"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            ],
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_name": "ceph_lv0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_size": "21470642176",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "name": "ceph_lv0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "tags": {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_name": "ceph",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.crush_device_class": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.encrypted": "0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_id": "0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.vdo": "0"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            },
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "vg_name": "ceph_vg0"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        }
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    ],
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    "1": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "devices": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "/dev/loop4"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            ],
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_name": "ceph_lv1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_size": "21470642176",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "name": "ceph_lv1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "tags": {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_name": "ceph",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.crush_device_class": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.encrypted": "0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_id": "1",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.vdo": "0"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            },
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "vg_name": "ceph_vg1"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        }
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    ],
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    "2": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "devices": [
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "/dev/loop5"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            ],
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_name": "ceph_lv2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_size": "21470642176",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "name": "ceph_lv2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "tags": {
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.cluster_name": "ceph",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.crush_device_class": "",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.encrypted": "0",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osd_id": "2",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:                "ceph.vdo": "0"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            },
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "type": "block",
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:            "vg_name": "ceph_vg2"
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:        }
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]:    ]
Nov 25 02:58:30 np0005534516 trusting_archimedes[134070]: }
Nov 25 02:58:30 np0005534516 systemd[1]: libpod-c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1.scope: Deactivated successfully.
Nov 25 02:58:30 np0005534516 podman[134052]: 2025-11-25 07:58:30.460144282 +0000 UTC m=+1.062602022 container died c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:58:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ecd1e1f6b466f6d74b14a72a44e53afc51f1d893dd4e16f795e667e0b97b5575-merged.mount: Deactivated successfully.
Nov 25 02:58:30 np0005534516 podman[134052]: 2025-11-25 07:58:30.711121623 +0000 UTC m=+1.313579293 container remove c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:58:30 np0005534516 python3.9[134231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:58:30 np0005534516 systemd[1]: libpod-conmon-c6f9379ce412c3bb36b9b52c4622c4e94f439089cb79d80cdec13aa8bfcfa1b1.scope: Deactivated successfully.
Nov 25 02:58:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.540092403 +0000 UTC m=+0.098817046 container create a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.470587254 +0000 UTC m=+0.029311947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:31 np0005534516 systemd[1]: Started libpod-conmon-a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502.scope.
Nov 25 02:58:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.700154274 +0000 UTC m=+0.258878897 container init a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.714672118 +0000 UTC m=+0.273396721 container start a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:58:31 np0005534516 adoring_hellman[134430]: 167 167
Nov 25 02:58:31 np0005534516 systemd[1]: libpod-a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502.scope: Deactivated successfully.
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.793285115 +0000 UTC m=+0.352009718 container attach a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:58:31 np0005534516 podman[134414]: 2025-11-25 07:58:31.793701317 +0000 UTC m=+0.352425920 container died a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 02:58:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-481f31380be9ccdfd5014ee2791f47ba7df99a3075dae9890e9b9dd0f3e57ad5-merged.mount: Deactivated successfully.
Nov 25 02:58:32 np0005534516 podman[134414]: 2025-11-25 07:58:32.103446905 +0000 UTC m=+0.662171518 container remove a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:58:32 np0005534516 systemd[1]: libpod-conmon-a147a6473c55f66ed4b1a6dc455923a82f057b0acd1283fa39b57403e5208502.scope: Deactivated successfully.
Nov 25 02:58:32 np0005534516 podman[134530]: 2025-11-25 07:58:32.305027644 +0000 UTC m=+0.092537976 container create ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:58:32 np0005534516 podman[134530]: 2025-11-25 07:58:32.234878497 +0000 UTC m=+0.022388959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:58:32 np0005534516 systemd[1]: Started libpod-conmon-ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d.scope.
Nov 25 02:58:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:58:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93e99e88969aec3a1de282261f8a03f557d6911179ce08fe983dd0477a5db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93e99e88969aec3a1de282261f8a03f557d6911179ce08fe983dd0477a5db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93e99e88969aec3a1de282261f8a03f557d6911179ce08fe983dd0477a5db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93e99e88969aec3a1de282261f8a03f557d6911179ce08fe983dd0477a5db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:58:32 np0005534516 podman[134530]: 2025-11-25 07:58:32.508663859 +0000 UTC m=+0.296174201 container init ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:58:32 np0005534516 podman[134530]: 2025-11-25 07:58:32.517234712 +0000 UTC m=+0.304745014 container start ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:58:32 np0005534516 podman[134530]: 2025-11-25 07:58:32.529580687 +0000 UTC m=+0.317090999 container attach ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:58:32 np0005534516 python3.9[134596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:33 np0005534516 python3.9[134755]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]: {
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_id": 1,
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "type": "bluestore"
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    },
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_id": 2,
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "type": "bluestore"
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    },
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_id": 0,
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:        "type": "bluestore"
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]:    }
Nov 25 02:58:33 np0005534516 amazing_heisenberg[134599]: }
Nov 25 02:58:33 np0005534516 systemd[1]: libpod-ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d.scope: Deactivated successfully.
Nov 25 02:58:33 np0005534516 systemd[1]: libpod-ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d.scope: Consumed 1.113s CPU time.
Nov 25 02:58:33 np0005534516 podman[134530]: 2025-11-25 07:58:33.634455927 +0000 UTC m=+1.421966269 container died ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 02:58:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-29c93e99e88969aec3a1de282261f8a03f557d6911179ce08fe983dd0477a5db-merged.mount: Deactivated successfully.
Nov 25 02:58:33 np0005534516 podman[134530]: 2025-11-25 07:58:33.892071069 +0000 UTC m=+1.679581391 container remove ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 02:58:33 np0005534516 systemd[1]: libpod-conmon-ca74cf23f4c3ad9304a81572c27decd8ba5a412e4bbb820377d88a72d189865d.scope: Deactivated successfully.
Nov 25 02:58:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:58:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:58:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f05a86e1-fd59-44ae-b034-482c06808d83 does not exist
Nov 25 02:58:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8d1adc92-b189-453d-9dee-bd8a58986721 does not exist
Nov 25 02:58:34 np0005534516 python3.9[134953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:58:35 np0005534516 python3.9[135121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057513.5583165-65-89428674007504/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7e0508655b3da6040f4410ff39014cd07bef24f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:35 np0005534516 python3.9[135273]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:36 np0005534516 python3.9[135396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057515.2344956-65-24730710705481/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9796542921156c84fd6ff5bdc2d096cc9872c6aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:37 np0005534516 python3.9[135548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:37 np0005534516 python3.9[135671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057516.5045466-65-56505690123733/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6725118d4270b93511686505f836864ea04d4908 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:38 np0005534516 python3.9[135823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:39 np0005534516 python3.9[135975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:40 np0005534516 python3.9[136127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:40 np0005534516 python3.9[136250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057519.5727787-124-193314014956469/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2a29a7607ae61d3035fb53ebedf161ea61a39b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:41 np0005534516 python3.9[136402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:42 np0005534516 python3.9[136525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057520.9129574-124-56935000269427/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=effaa572d356195b188e95ce96181d77aa56057f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:43 np0005534516 python3.9[136677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:43 np0005534516 python3.9[136800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057522.5463119-124-136362983212782/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7eadc11c5bfa63884b5ff0d48b8f82c5ae04103b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:44 np0005534516 python3.9[136952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:45 np0005534516 python3.9[137104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:46 np0005534516 python3.9[137256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:46 np0005534516 python3.9[137379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057525.546977-183-33027769849896/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=027058764c1c41cc9e64c5bb574cac71f50a31fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:47 np0005534516 python3.9[137531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:48 np0005534516 python3.9[137654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057526.913083-183-14598451874462/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=effaa572d356195b188e95ce96181d77aa56057f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:48 np0005534516 python3.9[137806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:49 np0005534516 python3.9[137929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057528.2849562-183-60941071511334/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2e6a1309b55e8685f3a95de70fd2545010a45969 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:50 np0005534516 python3.9[138081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:51 np0005534516 python3.9[138233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:51 np0005534516 python3.9[138356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057530.7679808-251-118087415798192/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:52 np0005534516 python3.9[138508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:58:53
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms']
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:53 np0005534516 python3.9[138660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:58:54 np0005534516 python3.9[138783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057532.8981104-275-49528875958071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:54 np0005534516 python3.9[138935]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:55 np0005534516 python3.9[139087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:56 np0005534516 python3.9[139210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057534.9409406-299-12567096125618/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:56 np0005534516 python3.9[139362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:57 np0005534516 python3.9[139514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:58:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:58:58 np0005534516 python3.9[139637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057537.063957-323-70235386142076/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:58:59 np0005534516 python3.9[139789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:58:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:58:59 np0005534516 python3.9[139941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:00 np0005534516 python3.9[140064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057539.193243-347-38301767505754/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:01 np0005534516 python3.9[140216]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:59:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:01 np0005534516 python3.9[140368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:02 np0005534516 python3.9[140491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057541.3003561-371-89961998137593/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0d57da45e7109a066556ab4f54b8767ce2bd0faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:02 np0005534516 systemd-logind[822]: Session 44 logged out. Waiting for processes to exit.
Nov 25 02:59:02 np0005534516 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 02:59:02 np0005534516 systemd[1]: session-44.scope: Consumed 25.839s CPU time.
Nov 25 02:59:02 np0005534516 systemd-logind[822]: Removed session 44.
Nov 25 02:59:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 02:59:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 02:59:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:08 np0005534516 systemd-logind[822]: New session 45 of user zuul.
Nov 25 02:59:08 np0005534516 systemd[1]: Started Session 45 of User zuul.
Nov 25 02:59:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:08 np0005534516 python3.9[140671]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:09 np0005534516 python3.9[140823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:10 np0005534516 python3.9[140946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057549.0888832-34-262959655339115/.source.conf _original_basename=ceph.conf follow=False checksum=b62ed5d0a9d3b6bdad765c3fe5cb801da8dca7c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:11 np0005534516 python3.9[141098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:12 np0005534516 python3.9[141221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057550.9041114-34-118196864034680/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=456535103098adb3f62f4b843007d7273e91427f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:12 np0005534516 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 02:59:12 np0005534516 systemd[1]: session-45.scope: Consumed 3.064s CPU time.
Nov 25 02:59:12 np0005534516 systemd-logind[822]: Session 45 logged out. Waiting for processes to exit.
Nov 25 02:59:12 np0005534516 systemd-logind[822]: Removed session 45.
Nov 25 02:59:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:19 np0005534516 systemd-logind[822]: New session 46 of user zuul.
Nov 25 02:59:19 np0005534516 systemd[1]: Started Session 46 of User zuul.
Nov 25 02:59:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:20 np0005534516 python3.9[141399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:59:21 np0005534516 python3.9[141555]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:59:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:21 np0005534516 python3.9[141707]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:59:22 np0005534516 python3.9[141857]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:59:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:23 np0005534516 python3.9[142009]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 02:59:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:28 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 02:59:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:28 np0005534516 python3.9[142167]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 02:59:29 np0005534516 python3.9[142251]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 02:59:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:32 np0005534516 python3.9[142404]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 02:59:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:33 np0005534516 python3[142559]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 02:59:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:34 np0005534516 python3.9[142711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:59:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:59:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:34 np0005534516 python3.9[142976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:35 np0005534516 python3.9[143162]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8bed7d1e-183b-4c8d-89f9-5630ee23d9a8 does not exist
Nov 25 02:59:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a4738199-8823-4b18-94d2-c80244fadf5c does not exist
Nov 25 02:59:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 98537c8b-287f-4012-aa2b-336e1cde88ab does not exist
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 02:59:36 np0005534516 python3.9[143417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.325796107 +0000 UTC m=+0.045573933 container create e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:59:36 np0005534516 systemd[1]: Started libpod-conmon-e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5.scope.
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.300501452 +0000 UTC m=+0.020279338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.416042569 +0000 UTC m=+0.135820395 container init e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.424054768 +0000 UTC m=+0.143832594 container start e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.427499544 +0000 UTC m=+0.147277370 container attach e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 02:59:36 np0005534516 funny_pasteur[143578]: 167 167
Nov 25 02:59:36 np0005534516 systemd[1]: libpod-e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5.scope: Deactivated successfully.
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.429488918 +0000 UTC m=+0.149266764 container died e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 02:59:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-eee55599eed9b9372def3f568eb3441eddd700f7e96700aa809a413c2b4404a5-merged.mount: Deactivated successfully.
Nov 25 02:59:36 np0005534516 podman[143534]: 2025-11-25 07:59:36.47940167 +0000 UTC m=+0.199179536 container remove e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 02:59:36 np0005534516 systemd[1]: libpod-conmon-e9e5b1dd4083dfd67d9ba673b4a29b3340787cc483b79ccd2489022402f6c2d5.scope: Deactivated successfully.
Nov 25 02:59:36 np0005534516 python3.9[143577]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jtdrotqe recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:36 np0005534516 podman[143601]: 2025-11-25 07:59:36.641115536 +0000 UTC m=+0.053521412 container create e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:59:36 np0005534516 systemd[1]: Started libpod-conmon-e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3.scope.
Nov 25 02:59:36 np0005534516 podman[143601]: 2025-11-25 07:59:36.612694455 +0000 UTC m=+0.025100411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:36 np0005534516 podman[143601]: 2025-11-25 07:59:36.764462227 +0000 UTC m=+0.176868123 container init e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 02:59:36 np0005534516 podman[143601]: 2025-11-25 07:59:36.78058204 +0000 UTC m=+0.192987916 container start e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:59:36 np0005534516 podman[143601]: 2025-11-25 07:59:36.788164928 +0000 UTC m=+0.200570804 container attach e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:59:37 np0005534516 python3.9[143774]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:37 np0005534516 python3.9[143859]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:37 np0005534516 eager_taussig[143648]: --> passed data devices: 0 physical, 3 LVM
Nov 25 02:59:37 np0005534516 eager_taussig[143648]: --> relative data size: 1.0
Nov 25 02:59:37 np0005534516 eager_taussig[143648]: --> All data devices are unavailable
Nov 25 02:59:37 np0005534516 systemd[1]: libpod-e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3.scope: Deactivated successfully.
Nov 25 02:59:37 np0005534516 systemd[1]: libpod-e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3.scope: Consumed 1.041s CPU time.
Nov 25 02:59:37 np0005534516 podman[143601]: 2025-11-25 07:59:37.893209187 +0000 UTC m=+1.305615073 container died e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 02:59:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ee7aa33bb73a78fa73b4060acb3804571e6a9381953e16749327e6ee39eb77c7-merged.mount: Deactivated successfully.
Nov 25 02:59:38 np0005534516 podman[143601]: 2025-11-25 07:59:38.460041349 +0000 UTC m=+1.872447225 container remove e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 02:59:38 np0005534516 systemd[1]: libpod-conmon-e919435da3899d60f96578ac9498f73bd1f76bc756c39f0953543cd4dd3723f3.scope: Deactivated successfully.
Nov 25 02:59:38 np0005534516 python3.9[144042]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.094022178 +0000 UTC m=+0.044079933 container create 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:59:39 np0005534516 systemd[1]: Started libpod-conmon-205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76.scope.
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.071974012 +0000 UTC m=+0.022031847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.191206819 +0000 UTC m=+0.141264644 container init 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.203744224 +0000 UTC m=+0.153802009 container start 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 02:59:39 np0005534516 funny_engelbart[144297]: 167 167
Nov 25 02:59:39 np0005534516 systemd[1]: libpod-205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76.scope: Deactivated successfully.
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.20901967 +0000 UTC m=+0.159077455 container attach 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.209669167 +0000 UTC m=+0.159726982 container died 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 02:59:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9b916ee835f0ef76d24227c18512c743cc17f02002e0178bb7cd3edff8616bae-merged.mount: Deactivated successfully.
Nov 25 02:59:39 np0005534516 podman[144260]: 2025-11-25 07:59:39.263664011 +0000 UTC m=+0.213721796 container remove 205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 02:59:39 np0005534516 systemd[1]: libpod-conmon-205d3af01eac0d9acc2608221c856521a7440e67b8460b9c3bba202312769a76.scope: Deactivated successfully.
Nov 25 02:59:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:39 np0005534516 podman[144377]: 2025-11-25 07:59:39.449191601 +0000 UTC m=+0.052533384 container create 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 02:59:39 np0005534516 systemd[1]: Started libpod-conmon-56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5.scope.
Nov 25 02:59:39 np0005534516 podman[144377]: 2025-11-25 07:59:39.423613318 +0000 UTC m=+0.026955091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5da515434c6b273ca5f63b4545095d214b4ee49d18a60c7b7e02edb69999111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5da515434c6b273ca5f63b4545095d214b4ee49d18a60c7b7e02edb69999111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5da515434c6b273ca5f63b4545095d214b4ee49d18a60c7b7e02edb69999111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5da515434c6b273ca5f63b4545095d214b4ee49d18a60c7b7e02edb69999111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:39 np0005534516 python3[144371]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 02:59:39 np0005534516 podman[144377]: 2025-11-25 07:59:39.547841414 +0000 UTC m=+0.151183257 container init 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 02:59:39 np0005534516 podman[144377]: 2025-11-25 07:59:39.556540393 +0000 UTC m=+0.159882146 container start 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 02:59:39 np0005534516 podman[144377]: 2025-11-25 07:59:39.560785229 +0000 UTC m=+0.164127012 container attach 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 02:59:40 np0005534516 python3.9[144550]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]: {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    "0": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "devices": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "/dev/loop3"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            ],
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_name": "ceph_lv0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_size": "21470642176",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "name": "ceph_lv0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "tags": {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_name": "ceph",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.crush_device_class": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.encrypted": "0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_id": "0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.vdo": "0"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            },
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "vg_name": "ceph_vg0"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        }
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    ],
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    "1": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "devices": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "/dev/loop4"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            ],
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_name": "ceph_lv1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_size": "21470642176",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "name": "ceph_lv1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "tags": {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_name": "ceph",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.crush_device_class": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.encrypted": "0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_id": "1",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.vdo": "0"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            },
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "vg_name": "ceph_vg1"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        }
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    ],
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    "2": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "devices": [
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "/dev/loop5"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            ],
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_name": "ceph_lv2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_size": "21470642176",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "name": "ceph_lv2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "tags": {
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cephx_lockbox_secret": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.cluster_name": "ceph",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.crush_device_class": "",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.encrypted": "0",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osd_id": "2",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:                "ceph.vdo": "0"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            },
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "type": "block",
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:            "vg_name": "ceph_vg2"
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:        }
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]:    ]
Nov 25 02:59:40 np0005534516 upbeat_rubin[144394]: }
Nov 25 02:59:40 np0005534516 systemd[1]: libpod-56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5.scope: Deactivated successfully.
Nov 25 02:59:40 np0005534516 podman[144377]: 2025-11-25 07:59:40.362793437 +0000 UTC m=+0.966135200 container died 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 02:59:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b5da515434c6b273ca5f63b4545095d214b4ee49d18a60c7b7e02edb69999111-merged.mount: Deactivated successfully.
Nov 25 02:59:40 np0005534516 podman[144377]: 2025-11-25 07:59:40.44403351 +0000 UTC m=+1.047375263 container remove 56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:59:40 np0005534516 systemd[1]: libpod-conmon-56e212e42a4e5d6609a53223b64689f9b1f6b5b3a6ef00fa396f2c9f419293d5.scope: Deactivated successfully.
Nov 25 02:59:41 np0005534516 python3.9[144793]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057579.757666-157-129899054809286/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.172189138 +0000 UTC m=+0.060038481 container create 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:59:41 np0005534516 systemd[1]: Started libpod-conmon-26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420.scope.
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.139219192 +0000 UTC m=+0.027068615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.27559194 +0000 UTC m=+0.163441313 container init 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.284202127 +0000 UTC m=+0.172051460 container start 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.288252049 +0000 UTC m=+0.176101402 container attach 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 02:59:41 np0005534516 amazing_keldysh[144872]: 167 167
Nov 25 02:59:41 np0005534516 systemd[1]: libpod-26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420.scope: Deactivated successfully.
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.290953753 +0000 UTC m=+0.178803126 container died 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 02:59:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b796a56b13e013de556d516df7dca81b7838e07f3214fb10d6d3d947636d2739-merged.mount: Deactivated successfully.
Nov 25 02:59:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:41 np0005534516 podman[144832]: 2025-11-25 07:59:41.377976246 +0000 UTC m=+0.265825579 container remove 26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 02:59:41 np0005534516 systemd[1]: libpod-conmon-26b5feb4eac18c5042980c30b75c0650bce4632f53b391022c8cf476f87b3420.scope: Deactivated successfully.
Nov 25 02:59:41 np0005534516 podman[144971]: 2025-11-25 07:59:41.613545891 +0000 UTC m=+0.089389578 container create e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 02:59:41 np0005534516 podman[144971]: 2025-11-25 07:59:41.569253404 +0000 UTC m=+0.045097151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 02:59:41 np0005534516 systemd[1]: Started libpod-conmon-e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8.scope.
Nov 25 02:59:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 02:59:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13bce6387ded8601bdd4c85625b9b7cd78ebd2fbad1617d55413ff6d6023ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13bce6387ded8601bdd4c85625b9b7cd78ebd2fbad1617d55413ff6d6023ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13bce6387ded8601bdd4c85625b9b7cd78ebd2fbad1617d55413ff6d6023ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13bce6387ded8601bdd4c85625b9b7cd78ebd2fbad1617d55413ff6d6023ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 02:59:41 np0005534516 podman[144971]: 2025-11-25 07:59:41.787978297 +0000 UTC m=+0.263821964 container init e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 02:59:41 np0005534516 podman[144971]: 2025-11-25 07:59:41.796126851 +0000 UTC m=+0.271970498 container start e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 02:59:41 np0005534516 podman[144971]: 2025-11-25 07:59:41.802121235 +0000 UTC m=+0.277964912 container attach e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 02:59:41 np0005534516 python3.9[145040]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:42 np0005534516 python3.9[145170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057581.301656-172-129510845999401/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]: {
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_id": 1,
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "type": "bluestore"
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    },
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_id": 2,
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "type": "bluestore"
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    },
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_id": 0,
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:        "type": "bluestore"
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]:    }
Nov 25 02:59:42 np0005534516 boring_nightingale[145041]: }
Nov 25 02:59:42 np0005534516 systemd[1]: libpod-e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8.scope: Deactivated successfully.
Nov 25 02:59:42 np0005534516 systemd[1]: libpod-e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8.scope: Consumed 1.065s CPU time.
Nov 25 02:59:42 np0005534516 podman[144971]: 2025-11-25 07:59:42.856511592 +0000 UTC m=+1.332355229 container died e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 02:59:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d13bce6387ded8601bdd4c85625b9b7cd78ebd2fbad1617d55413ff6d6023ff9-merged.mount: Deactivated successfully.
Nov 25 02:59:42 np0005534516 podman[144971]: 2025-11-25 07:59:42.965211489 +0000 UTC m=+1.441055176 container remove e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_nightingale, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 02:59:42 np0005534516 systemd[1]: libpod-conmon-e0048151eb433ccecd821fa631e1d8cf63e7426dbeb3e77c7166e4caf8abd2e8.scope: Deactivated successfully.
Nov 25 02:59:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 02:59:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 02:59:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 975ad57c-50e3-4c70-ba68-4752a5b23fa7 does not exist
Nov 25 02:59:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e914ed58-7784-43d1-a9f0-3eaf3f700e7a does not exist
Nov 25 02:59:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:43 np0005534516 python3.9[145413]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:44 np0005534516 python3.9[145538]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057582.8312707-187-81281085625178/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 02:59:44 np0005534516 python3.9[145690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:45 np0005534516 python3.9[145815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057584.2254224-202-42427504455200/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:46 np0005534516 python3.9[145967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:46 np0005534516 python3.9[146092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057585.6849225-217-267023140291245/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:47 np0005534516 python3.9[146244]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:48 np0005534516 python3.9[146396]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:49 np0005534516 python3.9[146551]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:50 np0005534516 python3.9[146703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:51 np0005534516 python3.9[146856]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:59:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.558880) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591558942, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1214, "num_deletes": 251, "total_data_size": 1885863, "memory_usage": 1908928, "flush_reason": "Manual Compaction"}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591571959, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1847309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8494, "largest_seqno": 9707, "table_properties": {"data_size": 1841600, "index_size": 3104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11598, "raw_average_key_size": 18, "raw_value_size": 1830092, "raw_average_value_size": 2980, "num_data_blocks": 146, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057468, "oldest_key_time": 1764057468, "file_creation_time": 1764057591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 13134 microseconds, and 5444 cpu microseconds.
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.572019) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1847309 bytes OK
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.572044) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.573594) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.573606) EVENT_LOG_v1 {"time_micros": 1764057591573603, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.573624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1880354, prev total WAL file size 1880354, number of live WAL files 2.
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.574396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1804KB)], [23(6261KB)]
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591574422, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8258640, "oldest_snapshot_seqno": -1}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3340 keys, 6525190 bytes, temperature: kUnknown
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591608830, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6525190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6500738, "index_size": 14999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81293, "raw_average_key_size": 24, "raw_value_size": 6438136, "raw_average_value_size": 1927, "num_data_blocks": 659, "num_entries": 3340, "num_filter_entries": 3340, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764057591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.609063) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6525190 bytes
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.610752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.5 rd, 189.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.1 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.5) OK, records in: 3854, records dropped: 514 output_compression: NoCompression
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.610776) EVENT_LOG_v1 {"time_micros": 1764057591610763, "job": 8, "event": "compaction_finished", "compaction_time_micros": 34485, "compaction_time_cpu_micros": 13773, "output_level": 6, "num_output_files": 1, "total_output_size": 6525190, "num_input_records": 3854, "num_output_records": 3340, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591611296, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057591613031, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.574347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.613072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.613078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.613080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.613082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-07:59:51.613084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 02:59:51 np0005534516 python3.9[147010]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:52 np0005534516 python3.9[147165]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 02:59:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_07:59:53
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 02:59:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 02:59:53 np0005534516 python3.9[147315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 02:59:54 np0005534516 python3.9[147468]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:54 np0005534516 ovs-vsctl[147469]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 02:59:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:55 np0005534516 python3.9[147621]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:56 np0005534516 python3.9[147776]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 02:59:56 np0005534516 ovs-vsctl[147777]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 02:59:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:57 np0005534516 python3.9[147927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 02:59:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 02:59:58 np0005534516 python3.9[148081]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 02:59:59 np0005534516 python3.9[148233]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 02:59:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 02:59:59 np0005534516 python3.9[148311]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:00 np0005534516 python3.9[148463]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:00 np0005534516 python3.9[148541]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:01 np0005534516 python3.9[148693]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:02 np0005534516 python3.9[148845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:02 np0005534516 python3.9[148923]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:03 np0005534516 python3.9[149075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:00:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:00:03 np0005534516 python3.9[149153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:04 np0005534516 python3.9[149305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:00:04 np0005534516 systemd[1]: Reloading.
Nov 25 03:00:04 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:00:04 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:00:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:00:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.5 total, 600.0 interval#012Cumulative writes: 5210 writes, 22K keys, 5210 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5210 writes, 717 syncs, 7.27 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5210 writes, 22K keys, 5210 commit groups, 1.0 writes per commit group, ingest: 17.66 MB, 0.03 MB/s#012Interval WAL: 5210 writes, 717 syncs, 7.27 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 03:00:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:05 np0005534516 python3.9[149494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:05 np0005534516 python3.9[149572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:06 np0005534516 python3.9[149724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:07 np0005534516 python3.9[149802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:07 np0005534516 python3.9[149954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:00:07 np0005534516 systemd[1]: Reloading.
Nov 25 03:00:08 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:00:08 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:00:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:08 np0005534516 systemd[1]: Starting Create netns directory...
Nov 25 03:00:08 np0005534516 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 03:00:08 np0005534516 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 03:00:08 np0005534516 systemd[1]: Finished Create netns directory.
Nov 25 03:00:09 np0005534516 python3.9[150146]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:09 np0005534516 python3.9[150298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:10 np0005534516 python3.9[150421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057609.2644827-468-155628991566926/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:11 np0005534516 python3.9[150573]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:00:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.4 total, 600.0 interval#012Cumulative writes: 5983 writes, 25K keys, 5983 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5983 writes, 859 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5983 writes, 25K keys, 5983 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s#012Interval WAL: 5983 writes, 859 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 03:00:12 np0005534516 python3.9[150725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:12 np0005534516 python3.9[150848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057611.5822291-493-171372411176429/.source.json _original_basename=.97h0e34h follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:13 np0005534516 python3.9[151000]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:15 np0005534516 python3.9[151427]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 03:00:16 np0005534516 python3.9[151579]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 03:00:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:17 np0005534516 python3.9[151731]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 03:00:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:19 np0005534516 python3[151909]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 03:00:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:00:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 602.4 total, 600.0 interval#012Cumulative writes: 5204 writes, 22K keys, 5204 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5204 writes, 640 syncs, 8.13 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5204 writes, 22K keys, 5204 commit groups, 1.0 writes per commit group, ingest: 17.66 MB, 0.03 MB/s#012Interval WAL: 5204 writes, 640 syncs, 8.13 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 03:00:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:24 np0005534516 podman[151922]: 2025-11-25 08:00:24.885076054 +0000 UTC m=+5.509849100 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 03:00:25 np0005534516 podman[152044]: 2025-11-25 08:00:25.002942831 +0000 UTC m=+0.023525737 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 03:00:25 np0005534516 podman[152044]: 2025-11-25 08:00:25.112253296 +0000 UTC m=+0.132836182 container create 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 03:00:25 np0005534516 python3[151909]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 03:00:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:25 np0005534516 python3.9[152234]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:00:26 np0005534516 python3.9[152388]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:27 np0005534516 python3.9[152464]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:00:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:27 np0005534516 python3.9[152615]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057627.15938-581-227638989553948/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:00:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:28 np0005534516 python3.9[152691]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:00:28 np0005534516 systemd[1]: Reloading.
Nov 25 03:00:28 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:00:28 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:00:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:29 np0005534516 python3.9[152802]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:00:29 np0005534516 systemd[1]: Reloading.
Nov 25 03:00:29 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:00:29 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:00:30 np0005534516 systemd[1]: Starting ovn_controller container...
Nov 25 03:00:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee85d1edcca72062eb4cf12885571c95361be36fd67c24507f60d5857d28a96/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:31 np0005534516 systemd[1]: Started /usr/bin/podman healthcheck run 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191.
Nov 25 03:00:31 np0005534516 podman[152843]: 2025-11-25 08:00:31.309900248 +0000 UTC m=+1.030731972 container init 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 03:00:31 np0005534516 ovn_controller[152859]: + sudo -E kolla_set_configs
Nov 25 03:00:31 np0005534516 podman[152843]: 2025-11-25 08:00:31.350239028 +0000 UTC m=+1.071070702 container start 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 03:00:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:31 np0005534516 systemd[1]: Created slice User Slice of UID 0.
Nov 25 03:00:31 np0005534516 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 03:00:31 np0005534516 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 03:00:31 np0005534516 systemd[1]: Starting User Manager for UID 0...
Nov 25 03:00:31 np0005534516 edpm-start-podman-container[152843]: ovn_controller
Nov 25 03:00:31 np0005534516 systemd[152877]: Queued start job for default target Main User Target.
Nov 25 03:00:31 np0005534516 systemd[152877]: Created slice User Application Slice.
Nov 25 03:00:31 np0005534516 systemd[152877]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 03:00:31 np0005534516 systemd[152877]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 03:00:31 np0005534516 systemd[152877]: Reached target Paths.
Nov 25 03:00:31 np0005534516 systemd[152877]: Reached target Timers.
Nov 25 03:00:31 np0005534516 systemd[152877]: Starting D-Bus User Message Bus Socket...
Nov 25 03:00:31 np0005534516 systemd[152877]: Starting Create User's Volatile Files and Directories...
Nov 25 03:00:31 np0005534516 edpm-start-podman-container[152842]: Creating additional drop-in dependency for "ovn_controller" (8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191)
Nov 25 03:00:31 np0005534516 systemd[152877]: Listening on D-Bus User Message Bus Socket.
Nov 25 03:00:31 np0005534516 systemd[152877]: Finished Create User's Volatile Files and Directories.
Nov 25 03:00:31 np0005534516 systemd[152877]: Reached target Sockets.
Nov 25 03:00:31 np0005534516 systemd[152877]: Reached target Basic System.
Nov 25 03:00:31 np0005534516 systemd[152877]: Reached target Main User Target.
Nov 25 03:00:31 np0005534516 systemd[152877]: Startup finished in 177ms.
Nov 25 03:00:31 np0005534516 systemd[1]: Started User Manager for UID 0.
Nov 25 03:00:31 np0005534516 podman[152866]: 2025-11-25 08:00:31.658289955 +0000 UTC m=+0.291859020 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 25 03:00:31 np0005534516 systemd[1]: Started Session c1 of User root.
Nov 25 03:00:31 np0005534516 systemd[1]: 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191-454e310736cffab5.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 03:00:31 np0005534516 systemd[1]: 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191-454e310736cffab5.service: Failed with result 'exit-code'.
Nov 25 03:00:31 np0005534516 systemd[1]: Reloading.
Nov 25 03:00:31 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:00:31 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:00:31 np0005534516 systemd[1]: Started ovn_controller container.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: INFO:__main__:Validating config file
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: INFO:__main__:Writing out command to execute
Nov 25 03:00:32 np0005534516 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: ++ cat /run_command
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + ARGS=
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + sudo kolla_copy_cacerts
Nov 25 03:00:32 np0005534516 systemd[1]: Started Session c2 of User root.
Nov 25 03:00:32 np0005534516 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + [[ ! -n '' ]]
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + . kolla_extend_start
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + umask 0022
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1687] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1698] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 03:00:32 np0005534516 kernel: br-int: entered promiscuous mode
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1718] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 03:00:32 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1729] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 03:00:32 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1736] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 03:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:00:32Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.1908] manager: (ovn-e613fe-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 03:00:32 np0005534516 systemd-udevd[153017]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:00:32 np0005534516 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.2508] device (genev_sys_6081): carrier: link connected
Nov 25 03:00:32 np0005534516 systemd-udevd[153018]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:00:32 np0005534516 NetworkManager[48915]: <info>  [1764057632.2519] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 03:00:32 np0005534516 python3.9[153125]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:00:32 np0005534516 ovs-vsctl[153126]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 03:00:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:37 np0005534516 python3.9[153286]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:00:37 np0005534516 ovs-vsctl[153289]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 03:00:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:38 np0005534516 python3.9[153442]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:00:38 np0005534516 ovs-vsctl[153443]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 03:00:38 np0005534516 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 03:00:38 np0005534516 systemd[1]: session-46.scope: Consumed 1min 710ms CPU time.
Nov 25 03:00:38 np0005534516 systemd-logind[822]: Session 46 logged out. Waiting for processes to exit.
Nov 25 03:00:38 np0005534516 systemd-logind[822]: Removed session 46.
Nov 25 03:00:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 03:00:42 np0005534516 systemd[1]: Stopping User Manager for UID 0...
Nov 25 03:00:42 np0005534516 systemd[152877]: Activating special unit Exit the Session...
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped target Main User Target.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped target Basic System.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped target Paths.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped target Sockets.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped target Timers.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 03:00:42 np0005534516 systemd[152877]: Closed D-Bus User Message Bus Socket.
Nov 25 03:00:42 np0005534516 systemd[152877]: Stopped Create User's Volatile Files and Directories.
Nov 25 03:00:42 np0005534516 systemd[152877]: Removed slice User Application Slice.
Nov 25 03:00:42 np0005534516 systemd[152877]: Reached target Shutdown.
Nov 25 03:00:42 np0005534516 systemd[152877]: Finished Exit the Session.
Nov 25 03:00:42 np0005534516 systemd[152877]: Reached target Exit the Session.
Nov 25 03:00:42 np0005534516 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 03:00:42 np0005534516 systemd[1]: Stopped User Manager for UID 0.
Nov 25 03:00:42 np0005534516 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 03:00:42 np0005534516 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 03:00:42 np0005534516 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 03:00:42 np0005534516 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 03:00:42 np0005534516 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 03:00:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:43 np0005534516 systemd-logind[822]: New session 48 of user zuul.
Nov 25 03:00:43 np0005534516 systemd[1]: Started Session 48 of User zuul.
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 27d21cd8-8e11-44f1-99a0-5c8effc50b71 does not exist
Nov 25 03:00:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ea8665bb-fbac-44aa-a44b-ba43cd5ad4ff does not exist
Nov 25 03:00:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1e5f9631-40a9-457f-b698-a283bb369367 does not exist
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.704733359 +0000 UTC m=+0.057892446 container create 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:00:44 np0005534516 systemd[1]: Started libpod-conmon-92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94.scope.
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.672904189 +0000 UTC m=+0.026063356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.806217222 +0000 UTC m=+0.159376329 container init 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.815633397 +0000 UTC m=+0.168792474 container start 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.819549542 +0000 UTC m=+0.172708649 container attach 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:00:44 np0005534516 hardcore_wozniak[153911]: 167 167
Nov 25 03:00:44 np0005534516 systemd[1]: libpod-92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94.scope: Deactivated successfully.
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.826970833 +0000 UTC m=+0.180129930 container died 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:00:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d3ef51b80176d4b9448518dda48d91e74ae5b32261da58d731e8d84659d04ecc-merged.mount: Deactivated successfully.
Nov 25 03:00:44 np0005534516 podman[153868]: 2025-11-25 08:00:44.880708595 +0000 UTC m=+0.233867692 container remove 92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:00:44 np0005534516 systemd[1]: libpod-conmon-92eb843d8ad13d2e7d8b384cd81956cf5d31d196d191bb3523b2843da9cd2e94.scope: Deactivated successfully.
Nov 25 03:00:45 np0005534516 python3.9[153908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 03:00:45 np0005534516 podman[153936]: 2025-11-25 08:00:45.080470206 +0000 UTC m=+0.054538726 container create e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:00:45 np0005534516 systemd[1]: Started libpod-conmon-e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac.scope.
Nov 25 03:00:45 np0005534516 podman[153936]: 2025-11-25 08:00:45.061807701 +0000 UTC m=+0.035876241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:45 np0005534516 podman[153936]: 2025-11-25 08:00:45.189732899 +0000 UTC m=+0.163801489 container init e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:00:45 np0005534516 podman[153936]: 2025-11-25 08:00:45.200971253 +0000 UTC m=+0.175039773 container start e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:00:45 np0005534516 podman[153936]: 2025-11-25 08:00:45.204770515 +0000 UTC m=+0.178839085 container attach e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:00:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:46 np0005534516 python3.9[154124]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:46 np0005534516 charming_booth[153957]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:00:46 np0005534516 charming_booth[153957]: --> relative data size: 1.0
Nov 25 03:00:46 np0005534516 charming_booth[153957]: --> All data devices are unavailable
Nov 25 03:00:46 np0005534516 systemd[1]: libpod-e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac.scope: Deactivated successfully.
Nov 25 03:00:46 np0005534516 systemd[1]: libpod-e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac.scope: Consumed 1.057s CPU time.
Nov 25 03:00:46 np0005534516 podman[153936]: 2025-11-25 08:00:46.344093462 +0000 UTC m=+1.318162002 container died e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:00:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1d56ac72ca132a8d334d5b23b966b214536488f772185aa24e2ddc42b40a404d-merged.mount: Deactivated successfully.
Nov 25 03:00:46 np0005534516 podman[153936]: 2025-11-25 08:00:46.40694891 +0000 UTC m=+1.381017440 container remove e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:00:46 np0005534516 systemd[1]: libpod-conmon-e75a6e349611505e028059db54d9e8bf9e92f64f74ba10131ac3f23a7cf033ac.scope: Deactivated successfully.
Nov 25 03:00:46 np0005534516 python3.9[154398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.088461471 +0000 UTC m=+0.075016048 container create 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.033947218 +0000 UTC m=+0.020501795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:47 np0005534516 systemd[1]: Started libpod-conmon-72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0.scope.
Nov 25 03:00:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.184682243 +0000 UTC m=+0.171236820 container init 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.192056751 +0000 UTC m=+0.178611298 container start 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:00:47 np0005534516 flamboyant_jang[154535]: 167 167
Nov 25 03:00:47 np0005534516 systemd[1]: libpod-72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0.scope: Deactivated successfully.
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.198698362 +0000 UTC m=+0.185252949 container attach 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.198912697 +0000 UTC m=+0.185467244 container died 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:00:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-07e2428e7b28e627e42d90baa5f7754710eff33835bc96b800a9f7f54c8eba1a-merged.mount: Deactivated successfully.
Nov 25 03:00:47 np0005534516 podman[154467]: 2025-11-25 08:00:47.369764725 +0000 UTC m=+0.356319272 container remove 72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:00:47 np0005534516 systemd[1]: libpod-conmon-72c07d172e65113b5a185423ab4f98643028c3cdbe74010bf5950d2293b989a0.scope: Deactivated successfully.
Nov 25 03:00:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:47 np0005534516 python3.9[154628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:47 np0005534516 podman[154636]: 2025-11-25 08:00:47.608287293 +0000 UTC m=+0.061169015 container create c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:00:47 np0005534516 systemd[1]: Started libpod-conmon-c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57.scope.
Nov 25 03:00:47 np0005534516 podman[154636]: 2025-11-25 08:00:47.581620031 +0000 UTC m=+0.034501773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c25463d78c783f11db171e535d11680801337ec52980ee70a2e71f7efd2c172/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c25463d78c783f11db171e535d11680801337ec52980ee70a2e71f7efd2c172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c25463d78c783f11db171e535d11680801337ec52980ee70a2e71f7efd2c172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c25463d78c783f11db171e535d11680801337ec52980ee70a2e71f7efd2c172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:47 np0005534516 podman[154636]: 2025-11-25 08:00:47.715216553 +0000 UTC m=+0.168098285 container init c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:00:47 np0005534516 podman[154636]: 2025-11-25 08:00:47.725898422 +0000 UTC m=+0.178780124 container start c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:00:47 np0005534516 podman[154636]: 2025-11-25 08:00:47.73989805 +0000 UTC m=+0.192779752 container attach c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:00:48 np0005534516 python3.9[154808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]: {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    "0": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "devices": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "/dev/loop3"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            ],
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_name": "ceph_lv0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_size": "21470642176",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "name": "ceph_lv0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "tags": {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_name": "ceph",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.crush_device_class": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.encrypted": "0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_id": "0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.vdo": "0"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            },
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "vg_name": "ceph_vg0"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        }
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    ],
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    "1": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "devices": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "/dev/loop4"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            ],
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_name": "ceph_lv1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_size": "21470642176",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "name": "ceph_lv1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "tags": {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_name": "ceph",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.crush_device_class": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.encrypted": "0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_id": "1",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.vdo": "0"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            },
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "vg_name": "ceph_vg1"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        }
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    ],
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    "2": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "devices": [
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "/dev/loop5"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            ],
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_name": "ceph_lv2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_size": "21470642176",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "name": "ceph_lv2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "tags": {
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.cluster_name": "ceph",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.crush_device_class": "",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.encrypted": "0",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osd_id": "2",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:                "ceph.vdo": "0"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            },
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "type": "block",
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:            "vg_name": "ceph_vg2"
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:        }
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]:    ]
Nov 25 03:00:48 np0005534516 competent_maxwell[154676]: }
Nov 25 03:00:48 np0005534516 systemd[1]: libpod-c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57.scope: Deactivated successfully.
Nov 25 03:00:48 np0005534516 podman[154636]: 2025-11-25 08:00:48.654211374 +0000 UTC m=+1.107093076 container died c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:00:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2c25463d78c783f11db171e535d11680801337ec52980ee70a2e71f7efd2c172-merged.mount: Deactivated successfully.
Nov 25 03:00:48 np0005534516 podman[154636]: 2025-11-25 08:00:48.72248025 +0000 UTC m=+1.175361952 container remove c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:00:48 np0005534516 systemd[1]: libpod-conmon-c7d2f81e52047cf99c172073c137bdc4f7fa429169482d46bba94380f500ca57.scope: Deactivated successfully.
Nov 25 03:00:48 np0005534516 python3.9[154975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.508420823 +0000 UTC m=+0.082550862 container create 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.464885027 +0000 UTC m=+0.039015146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:49 np0005534516 systemd[1]: Started libpod-conmon-4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20.scope.
Nov 25 03:00:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.621107959 +0000 UTC m=+0.195238028 container init 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.631031748 +0000 UTC m=+0.205161807 container start 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.637834151 +0000 UTC m=+0.211964210 container attach 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 03:00:49 np0005534516 infallible_galileo[155281]: 167 167
Nov 25 03:00:49 np0005534516 systemd[1]: libpod-4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20.scope: Deactivated successfully.
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.642269722 +0000 UTC m=+0.216399771 container died 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:00:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cc7a2300778cbaccc1f7789805a33d38e9ed5b360a6f4b2fc914e1a869110633-merged.mount: Deactivated successfully.
Nov 25 03:00:49 np0005534516 podman[155243]: 2025-11-25 08:00:49.688894512 +0000 UTC m=+0.263024551 container remove 4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:00:49 np0005534516 systemd[1]: libpod-conmon-4a76638fc72a5e529b2a4d64c471f7a9e090c74c1efc39cf05cb31c54d8fad20.scope: Deactivated successfully.
Nov 25 03:00:49 np0005534516 python3.9[155275]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 03:00:49 np0005534516 podman[155308]: 2025-11-25 08:00:49.881613091 +0000 UTC m=+0.043489276 container create 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:00:49 np0005534516 systemd[1]: Started libpod-conmon-46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3.scope.
Nov 25 03:00:49 np0005534516 podman[155308]: 2025-11-25 08:00:49.861904418 +0000 UTC m=+0.023780653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:00:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:00:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62718db445a8532a4c7f0e163203e2f06f318177f3210e2a2c8c986505e9b663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62718db445a8532a4c7f0e163203e2f06f318177f3210e2a2c8c986505e9b663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62718db445a8532a4c7f0e163203e2f06f318177f3210e2a2c8c986505e9b663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62718db445a8532a4c7f0e163203e2f06f318177f3210e2a2c8c986505e9b663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:00:49 np0005534516 podman[155308]: 2025-11-25 08:00:49.996127576 +0000 UTC m=+0.158003781 container init 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:00:50 np0005534516 podman[155308]: 2025-11-25 08:00:50.004153343 +0000 UTC m=+0.166029528 container start 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:00:50 np0005534516 podman[155308]: 2025-11-25 08:00:50.007573995 +0000 UTC m=+0.169450200 container attach 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:00:50 np0005534516 python3.9[155477]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 03:00:50 np0005534516 jovial_cori[155352]: {
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_id": 1,
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "type": "bluestore"
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    },
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_id": 2,
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "type": "bluestore"
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    },
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_id": 0,
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:        "type": "bluestore"
Nov 25 03:00:50 np0005534516 jovial_cori[155352]:    }
Nov 25 03:00:50 np0005534516 jovial_cori[155352]: }
Nov 25 03:00:51 np0005534516 systemd[1]: libpod-46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3.scope: Deactivated successfully.
Nov 25 03:00:51 np0005534516 systemd[1]: libpod-46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3.scope: Consumed 1.022s CPU time.
Nov 25 03:00:51 np0005534516 podman[155506]: 2025-11-25 08:00:51.066029506 +0000 UTC m=+0.030118686 container died 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:00:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-62718db445a8532a4c7f0e163203e2f06f318177f3210e2a2c8c986505e9b663-merged.mount: Deactivated successfully.
Nov 25 03:00:51 np0005534516 podman[155506]: 2025-11-25 08:00:51.162474773 +0000 UTC m=+0.126563933 container remove 46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cori, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:00:51 np0005534516 systemd[1]: libpod-conmon-46eb0ad3fc1d902ff5c4e703511f6d24506b5761b71441574109e999fc3be3b3.scope: Deactivated successfully.
Nov 25 03:00:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:00:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:00:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b768e981-0da9-48b9-8d67-663287e2a31c does not exist
Nov 25 03:00:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 25c1bcef-b3c1-4433-adce-2e162760ba6c does not exist
Nov 25 03:00:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:52 np0005534516 python3.9[155720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:00:52 np0005534516 python3.9[155841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057651.4840145-86-10564964053375/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:00:53
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'images']
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:00:53 np0005534516 python3.9[155991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:00:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:00:54 np0005534516 python3.9[156112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057652.9890373-101-56729778261203/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:00:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:54 np0005534516 python3.9[156264]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 03:00:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:55 np0005534516 python3.9[156348]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 03:00:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:58 np0005534516 python3.9[156501]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:00:59 np0005534516 python3.9[156654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:00:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:00:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:00:59 np0005534516 python3.9[156775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057658.5045059-138-30535245386563/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:00 np0005534516 python3.9[156925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:00 np0005534516 python3.9[157046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057659.8497405-138-144878202132633/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:01:01Z|00025|memory|INFO|16128 kB peak resident set size after 29.7 seconds
Nov 25 03:01:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:01:01Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 03:01:01 np0005534516 podman[157135]: 2025-11-25 08:01:01.840717935 +0000 UTC m=+0.091437550 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_controller)
Nov 25 03:01:02 np0005534516 python3.9[157235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:02 np0005534516 python3.9[157356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057661.6668513-182-18349112148055/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:03 np0005534516 python3.9[157506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:01:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:01:03 np0005534516 python3.9[157627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057662.795806-182-167259346799019/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:04 np0005534516 python3.9[157777]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:01:05 np0005534516 python3.9[157931]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:05 np0005534516 python3.9[158083]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:06 np0005534516 python3.9[158161]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:07 np0005534516 python3.9[158313]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:07 np0005534516 python3.9[158391]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:08 np0005534516 python3.9[158543]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:08 np0005534516 python3.9[158695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:09 np0005534516 python3.9[158773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:10 np0005534516 python3.9[158925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:10 np0005534516 python3.9[159003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:11 np0005534516 python3.9[159155]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:11 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:11 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:11 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:12 np0005534516 python3.9[159344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:13 np0005534516 python3.9[159422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:13 np0005534516 python3.9[159574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:14 np0005534516 python3.9[159652]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:15 np0005534516 python3.9[159804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:15 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:15 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:15 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:15 np0005534516 systemd[1]: Starting Create netns directory...
Nov 25 03:01:15 np0005534516 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 03:01:15 np0005534516 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 03:01:15 np0005534516 systemd[1]: Finished Create netns directory.
Nov 25 03:01:16 np0005534516 python3.9[159997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:17 np0005534516 python3.9[160149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:17 np0005534516 python3.9[160272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764057676.640632-333-77752434797693/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:18 np0005534516 python3.9[160424]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:01:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:19 np0005534516 python3.9[160576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:01:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:19 np0005534516 python3.9[160699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057678.7725828-358-141969043717556/.source.json _original_basename=.22r8wusc follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:20 np0005534516 python3.9[160851]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:22 np0005534516 python3.9[161278]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:23 np0005534516 python3.9[161430]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 03:01:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:24 np0005534516 python3.9[161582]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 03:01:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:26 np0005534516 python3[161760]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 03:01:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:33 np0005534516 podman[161838]: 2025-11-25 08:01:33.858353657 +0000 UTC m=+1.101333480 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:01:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:34 np0005534516 podman[161773]: 2025-11-25 08:01:34.31266947 +0000 UTC m=+8.158990371 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:01:34 np0005534516 podman[161918]: 2025-11-25 08:01:34.460029541 +0000 UTC m=+0.055824320 container create 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:01:34 np0005534516 podman[161918]: 2025-11-25 08:01:34.430063396 +0000 UTC m=+0.025858215 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:01:34 np0005534516 python3[161760]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:01:35 np0005534516 python3.9[162109]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:01:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:35 np0005534516 python3.9[162263]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:36 np0005534516 python3.9[162339]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:01:37 np0005534516 python3.9[162490]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764057696.484899-446-166768213557371/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:01:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:37 np0005534516 python3.9[162566]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:01:37 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:37 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:37 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:38 np0005534516 python3.9[162676]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:38 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:38 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:38 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:38 np0005534516 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 03:01:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:01:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632c0b87b89204d5bdb0f498676698a8d7cef8776c44dbbf4416d0682a90e175/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632c0b87b89204d5bdb0f498676698a8d7cef8776c44dbbf4416d0682a90e175/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:39 np0005534516 systemd[1]: Started /usr/bin/podman healthcheck run 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481.
Nov 25 03:01:39 np0005534516 podman[162718]: 2025-11-25 08:01:39.094667644 +0000 UTC m=+0.104785660 container init 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:01:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + sudo -E kolla_set_configs
Nov 25 03:01:39 np0005534516 podman[162718]: 2025-11-25 08:01:39.125495204 +0000 UTC m=+0.135613200 container start 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 03:01:39 np0005534516 edpm-start-podman-container[162718]: ovn_metadata_agent
Nov 25 03:01:39 np0005534516 edpm-start-podman-container[162717]: Creating additional drop-in dependency for "ovn_metadata_agent" (5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481)
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Validating config file
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Copying service configuration files
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Writing out command to execute
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 03:01:39 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: ++ cat /run_command
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + CMD=neutron-ovn-metadata-agent
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + ARGS=
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + sudo kolla_copy_cacerts
Nov 25 03:01:39 np0005534516 podman[162741]: 2025-11-25 08:01:39.205029826 +0000 UTC m=+0.069920379 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + [[ ! -n '' ]]
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + . kolla_extend_start
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + umask 0022
Nov 25 03:01:39 np0005534516 ovn_metadata_agent[162734]: + exec neutron-ovn-metadata-agent
Nov 25 03:01:39 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:39 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:39 np0005534516 systemd[1]: Started ovn_metadata_agent container.
Nov 25 03:01:39 np0005534516 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 03:01:40 np0005534516 systemd[1]: session-48.scope: Consumed 55.168s CPU time.
Nov 25 03:01:40 np0005534516 systemd-logind[822]: Session 48 logged out. Waiting for processes to exit.
Nov 25 03:01:40 np0005534516 systemd-logind[822]: Removed session 48.
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.986 162739 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.987 162739 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.987 162739 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.987 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.987 162739 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.987 162739 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.988 162739 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.989 162739 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.990 162739 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.991 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.992 162739 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.993 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.994 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.995 162739 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.996 162739 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.997 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.998 162739 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:40.999 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.000 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.001 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.002 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.003 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.004 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.005 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.006 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.007 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.008 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.009 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.010 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.011 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.012 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.013 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.014 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.015 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.016 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.017 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.018 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.019 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.020 162739 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.028 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.028 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.029 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.029 162739 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.029 162739 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.041 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 0e3362c2-a4a0-4a10-9289-943331244f84 (UUID: 0e3362c2-a4a0-4a10-9289-943331244f84) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.065 162739 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.065 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.065 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.065 162739 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.068 162739 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.074 162739 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.078 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '0e3362c2-a4a0-4a10-9289-943331244f84'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], external_ids={}, name=0e3362c2-a4a0-4a10-9289-943331244f84, nb_cfg_timestamp=1764057640193, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.079 162739 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f150fc0ab20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.080 162739 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.084 162739 DEBUG oslo_service.service [-] Started child 162847 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.087 162739 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp9uc93ig9/privsep.sock']#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.087 162847 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166554'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.106 162847 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.107 162847 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.107 162847 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.110 162847 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.116 162847 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.123 162847 INFO eventlet.wsgi.server [-] (162847) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 25 03:01:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:41 np0005534516 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.797 162739 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.798 162739 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9uc93ig9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.670 162852 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.677 162852 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.681 162852 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.682 162852 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162852#033[00m
Nov 25 03:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:41.801 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0d52eb59-be2d-40b9-afd5-57e012a0e547]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.295 162852 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.295 162852 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.295 162852 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.897 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9145c816-cb3d-4689-8289-3e8a997573a5]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.899 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, column=external_ids, values=({'neutron:ovn-metadata-id': 'cebe41b2-427a-5d5f-8f4c-a5ea438e3c1e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.912 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.920 162739 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.921 162739 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.921 162739 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.921 162739 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.921 162739 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.921 162739 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.922 162739 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.922 162739 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.923 162739 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.923 162739 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.923 162739 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.923 162739 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.924 162739 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.924 162739 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.924 162739 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.925 162739 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.925 162739 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.925 162739 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.925 162739 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.926 162739 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.926 162739 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.926 162739 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.927 162739 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.927 162739 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.927 162739 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.928 162739 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.928 162739 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.928 162739 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.929 162739 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.929 162739 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.929 162739 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.930 162739 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.930 162739 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.930 162739 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.931 162739 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.931 162739 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.931 162739 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.932 162739 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.932 162739 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.932 162739 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.932 162739 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.933 162739 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.933 162739 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.933 162739 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.934 162739 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.934 162739 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.934 162739 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.934 162739 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.935 162739 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.935 162739 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.935 162739 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.935 162739 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.936 162739 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.936 162739 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.936 162739 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.936 162739 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.937 162739 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.937 162739 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.937 162739 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.937 162739 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.937 162739 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.938 162739 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.938 162739 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.938 162739 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.939 162739 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.939 162739 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.939 162739 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.939 162739 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.940 162739 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.940 162739 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.940 162739 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.941 162739 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.941 162739 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.941 162739 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.941 162739 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.942 162739 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.942 162739 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.942 162739 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.942 162739 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.943 162739 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.943 162739 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.943 162739 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.943 162739 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.944 162739 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.944 162739 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.944 162739 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.944 162739 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.945 162739 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.945 162739 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.945 162739 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.946 162739 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.946 162739 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.946 162739 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.946 162739 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.946 162739 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.947 162739 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.947 162739 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.947 162739 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.947 162739 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.948 162739 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.948 162739 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.948 162739 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.948 162739 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.949 162739 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.949 162739 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.949 162739 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.950 162739 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.950 162739 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.950 162739 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.950 162739 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.951 162739 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.951 162739 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.951 162739 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.952 162739 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.952 162739 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.952 162739 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.952 162739 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.953 162739 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.953 162739 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.953 162739 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.954 162739 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.954 162739 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.954 162739 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.954 162739 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.955 162739 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.955 162739 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.955 162739 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.956 162739 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.956 162739 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.956 162739 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.956 162739 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.957 162739 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.957 162739 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.957 162739 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.957 162739 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.958 162739 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.958 162739 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.958 162739 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.959 162739 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.959 162739 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.959 162739 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.959 162739 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.960 162739 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.960 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.960 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.960 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.961 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.961 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.961 162739 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.961 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.962 162739 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.963 162739 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.964 162739 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.965 162739 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.966 162739 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.967 162739 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.968 162739 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.969 162739 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.970 162739 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.971 162739 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.972 162739 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.973 162739 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.974 162739 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.974 162739 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.974 162739 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.974 162739 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.974 162739 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.975 162739 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.976 162739 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.976 162739 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.976 162739 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.976 162739 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.976 162739 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.977 162739 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.977 162739 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.977 162739 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.977 162739 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.977 162739 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.978 162739 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.979 162739 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.980 162739 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.980 162739 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.980 162739 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.980 162739 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.980 162739 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.981 162739 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.981 162739 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.981 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.981 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.981 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.982 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.982 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.982 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.982 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.982 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.983 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.984 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.985 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.986 162739 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.987 162739 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:01:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:01:42.987 162739 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 03:01:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:45 np0005534516 systemd-logind[822]: New session 49 of user zuul.
Nov 25 03:01:45 np0005534516 systemd[1]: Started Session 49 of User zuul.
Nov 25 03:01:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:46 np0005534516 python3.9[163010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 03:01:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:47 np0005534516 python3.9[163166]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:01:48 np0005534516 python3.9[163331]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:01:48 np0005534516 systemd[1]: Reloading.
Nov 25 03:01:49 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:01:49 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:01:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:50 np0005534516 python3.9[163517]: ansible-ansible.builtin.service_facts Invoked
Nov 25 03:01:50 np0005534516 network[163534]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 03:01:50 np0005534516 network[163535]: 'network-scripts' will be removed from distribution in near future.
Nov 25 03:01:50 np0005534516 network[163536]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 03:01:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:01:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 134a01a6-734a-470f-b6d6-bc40c8d31832 does not exist
Nov 25 03:01:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5df74e6f-0afa-40b2-b91c-0a629d689c81 does not exist
Nov 25 03:01:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fc2c6748-4582-4ca0-abc0-2482414c5919 does not exist
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:01:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:01:52 np0005534516 podman[163900]: 2025-11-25 08:01:52.966564746 +0000 UTC m=+0.069596329 container create 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:01:53 np0005534516 systemd[1]: Started libpod-conmon-659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56.scope.
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:52.935479278 +0000 UTC m=+0.038510921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:01:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:53.056830114 +0000 UTC m=+0.159861727 container init 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:53.069567975 +0000 UTC m=+0.172599578 container start 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:53.080991439 +0000 UTC m=+0.184023042 container attach 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:01:53 np0005534516 zen_blackwell[163921]: 167 167
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:53.094088151 +0000 UTC m=+0.197119754 container died 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:01:53 np0005534516 systemd[1]: libpod-659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56.scope: Deactivated successfully.
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:01:53
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images']
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:01:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6e3fd77abc7611b9c3b2daa10b8d46ceed5498bd526b31584d6ab9fb4ac10b89-merged.mount: Deactivated successfully.
Nov 25 03:01:53 np0005534516 podman[163900]: 2025-11-25 08:01:53.160372908 +0000 UTC m=+0.263404481 container remove 659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:01:53 np0005534516 systemd[1]: libpod-conmon-659e79606fa543a4e50f6837d5baf438c87856ef6ac39c3592860a38b511bb56.scope: Deactivated successfully.
Nov 25 03:01:53 np0005534516 podman[163973]: 2025-11-25 08:01:53.355243869 +0000 UTC m=+0.050608276 container create afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:01:53 np0005534516 systemd[1]: Started libpod-conmon-afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf.scope.
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:53 np0005534516 podman[163973]: 2025-11-25 08:01:53.333503501 +0000 UTC m=+0.028867988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:01:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:01:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:53 np0005534516 podman[163973]: 2025-11-25 08:01:53.48152481 +0000 UTC m=+0.176889257 container init afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:01:53 np0005534516 podman[163973]: 2025-11-25 08:01:53.493788609 +0000 UTC m=+0.189153036 container start afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:01:53 np0005534516 podman[163973]: 2025-11-25 08:01:53.499900127 +0000 UTC m=+0.195264554 container attach afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:01:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:01:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:54 np0005534516 python3.9[164125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:54 np0005534516 ecstatic_satoshi[163993]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:01:54 np0005534516 ecstatic_satoshi[163993]: --> relative data size: 1.0
Nov 25 03:01:54 np0005534516 ecstatic_satoshi[163993]: --> All data devices are unavailable
Nov 25 03:01:54 np0005534516 systemd[1]: libpod-afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf.scope: Deactivated successfully.
Nov 25 03:01:54 np0005534516 podman[163973]: 2025-11-25 08:01:54.858514867 +0000 UTC m=+1.553879254 container died afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:01:54 np0005534516 systemd[1]: libpod-afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf.scope: Consumed 1.122s CPU time.
Nov 25 03:01:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-202b7d8de5108c8dd3016a9d50c71c7c04d4ad7e38660ca5a4f458d5eb9ca46f-merged.mount: Deactivated successfully.
Nov 25 03:01:55 np0005534516 python3.9[164312]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:55 np0005534516 podman[163973]: 2025-11-25 08:01:55.418986686 +0000 UTC m=+2.114351113 container remove afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:01:55 np0005534516 systemd[1]: libpod-conmon-afe563cba466949ec40efd202736318152bffc645c09514af7d96cb1e00449bf.scope: Deactivated successfully.
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.075824961 +0000 UTC m=+0.022091179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.300464664 +0000 UTC m=+0.246730892 container create 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:01:56 np0005534516 python3.9[164568]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:56 np0005534516 systemd[1]: Started libpod-conmon-79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe.scope.
Nov 25 03:01:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.879653308 +0000 UTC m=+0.825919606 container init 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.893436978 +0000 UTC m=+0.839703216 container start 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:01:56 np0005534516 festive_goodall[164714]: 167 167
Nov 25 03:01:56 np0005534516 systemd[1]: libpod-79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe.scope: Deactivated successfully.
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.916332499 +0000 UTC m=+0.862598777 container attach 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:01:56 np0005534516 podman[164607]: 2025-11-25 08:01:56.917553133 +0000 UTC m=+0.863819371 container died 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:01:57 np0005534516 python3.9[164779]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-84b2cf5ff22c527b032d41ca53d24807b8c86b6b63a27682d18c99a2bb1e146f-merged.mount: Deactivated successfully.
Nov 25 03:01:57 np0005534516 podman[164607]: 2025-11-25 08:01:57.650770454 +0000 UTC m=+1.597036652 container remove 79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:01:57 np0005534516 systemd[1]: libpod-conmon-79c0a0506a5cb850cde686b09bb6b73ff792201591786be440f37eb3e23f1cfe.scope: Deactivated successfully.
Nov 25 03:01:57 np0005534516 podman[164954]: 2025-11-25 08:01:57.873530734 +0000 UTC m=+0.095163274 container create a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:01:57 np0005534516 podman[164954]: 2025-11-25 08:01:57.806483196 +0000 UTC m=+0.028115766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:01:57 np0005534516 systemd[1]: Started libpod-conmon-a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb.scope.
Nov 25 03:01:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:01:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f7e0b8562433ad9b4264837e1b4903a8fabed6bb22d12f586ee6f69ad5e29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f7e0b8562433ad9b4264837e1b4903a8fabed6bb22d12f586ee6f69ad5e29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f7e0b8562433ad9b4264837e1b4903a8fabed6bb22d12f586ee6f69ad5e29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f7e0b8562433ad9b4264837e1b4903a8fabed6bb22d12f586ee6f69ad5e29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:01:58 np0005534516 python3.9[164948]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:58 np0005534516 podman[164954]: 2025-11-25 08:01:58.09468768 +0000 UTC m=+0.316320260 container init a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:01:58 np0005534516 podman[164954]: 2025-11-25 08:01:58.109038246 +0000 UTC m=+0.330670776 container start a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:01:58 np0005534516 podman[164954]: 2025-11-25 08:01:58.176768982 +0000 UTC m=+0.398401602 container attach a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:01:58 np0005534516 python3.9[165127]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]: {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    "0": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "devices": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "/dev/loop3"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            ],
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_name": "ceph_lv0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_size": "21470642176",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "name": "ceph_lv0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "tags": {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_name": "ceph",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.crush_device_class": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.encrypted": "0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_id": "0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.vdo": "0"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            },
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "vg_name": "ceph_vg0"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        }
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    ],
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    "1": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "devices": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "/dev/loop4"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            ],
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_name": "ceph_lv1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_size": "21470642176",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "name": "ceph_lv1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "tags": {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_name": "ceph",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.crush_device_class": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.encrypted": "0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_id": "1",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.vdo": "0"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            },
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "vg_name": "ceph_vg1"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        }
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    ],
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    "2": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "devices": [
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "/dev/loop5"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            ],
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_name": "ceph_lv2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_size": "21470642176",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "name": "ceph_lv2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "tags": {
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.cluster_name": "ceph",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.crush_device_class": "",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.encrypted": "0",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osd_id": "2",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:                "ceph.vdo": "0"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            },
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "type": "block",
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:            "vg_name": "ceph_vg2"
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:        }
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]:    ]
Nov 25 03:01:59 np0005534516 xenodochial_khayyam[164970]: }
Nov 25 03:01:59 np0005534516 systemd[1]: libpod-a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb.scope: Deactivated successfully.
Nov 25 03:01:59 np0005534516 podman[164954]: 2025-11-25 08:01:59.055014501 +0000 UTC m=+1.276647071 container died a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:01:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:01:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-da1f7e0b8562433ad9b4264837e1b4903a8fabed6bb22d12f586ee6f69ad5e29-merged.mount: Deactivated successfully.
Nov 25 03:01:59 np0005534516 podman[164954]: 2025-11-25 08:01:59.387655221 +0000 UTC m=+1.609287801 container remove a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:01:59 np0005534516 systemd[1]: libpod-conmon-a0f1b8be3c5b0ca3d6a070e36c396451d532df12f2f4caed0deaca7c9e44eedb.scope: Deactivated successfully.
Nov 25 03:01:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:01:59 np0005534516 python3.9[165341]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.163131915 +0000 UTC m=+0.030419638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.311055393 +0000 UTC m=+0.178343066 container create ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:02:00 np0005534516 systemd[1]: Started libpod-conmon-ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802.scope.
Nov 25 03:02:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.469040379 +0000 UTC m=+0.336328112 container init ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.480526855 +0000 UTC m=+0.347814488 container start ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:02:00 np0005534516 systemd[1]: libpod-ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802.scope: Deactivated successfully.
Nov 25 03:02:00 np0005534516 keen_brattain[165528]: 167 167
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.526223434 +0000 UTC m=+0.393511077 container attach ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.527590972 +0000 UTC m=+0.394878605 container died ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:02:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4c18ba0e1a5ae28ea65c93660c1fe5b52059bce935aa337f60b2d7e7855045aa-merged.mount: Deactivated successfully.
Nov 25 03:02:00 np0005534516 podman[165460]: 2025-11-25 08:02:00.835523474 +0000 UTC m=+0.702811117 container remove ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:02:00 np0005534516 systemd[1]: libpod-conmon-ec3ffad37e35cb14b47ccf5f0b07571c1f107134ded892328a3c185f750d4802.scope: Deactivated successfully.
Nov 25 03:02:01 np0005534516 podman[165630]: 2025-11-25 08:02:01.075621077 +0000 UTC m=+0.086990466 container create 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:02:01 np0005534516 podman[165630]: 2025-11-25 08:02:01.018670038 +0000 UTC m=+0.030039487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:02:01 np0005534516 python3.9[165624]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:01 np0005534516 systemd[1]: Started libpod-conmon-00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540.scope.
Nov 25 03:02:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:02:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f20235bc9416ef867a0c492b9d107a522dc3a334f3856ba01d8286a1d15c7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:02:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f20235bc9416ef867a0c492b9d107a522dc3a334f3856ba01d8286a1d15c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:02:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f20235bc9416ef867a0c492b9d107a522dc3a334f3856ba01d8286a1d15c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:02:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f20235bc9416ef867a0c492b9d107a522dc3a334f3856ba01d8286a1d15c7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:02:01 np0005534516 podman[165630]: 2025-11-25 08:02:01.233628548 +0000 UTC m=+0.244997997 container init 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:02:01 np0005534516 podman[165630]: 2025-11-25 08:02:01.241206935 +0000 UTC m=+0.252576304 container start 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:02:01 np0005534516 podman[165630]: 2025-11-25 08:02:01.264681982 +0000 UTC m=+0.276051431 container attach 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:02:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:01 np0005534516 python3.9[165803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:02 np0005534516 kind_hellman[165647]: {
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_id": 1,
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "type": "bluestore"
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    },
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_id": 2,
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "type": "bluestore"
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    },
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_id": 0,
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:        "type": "bluestore"
Nov 25 03:02:02 np0005534516 kind_hellman[165647]:    }
Nov 25 03:02:02 np0005534516 kind_hellman[165647]: }
Nov 25 03:02:02 np0005534516 systemd[1]: libpod-00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540.scope: Deactivated successfully.
Nov 25 03:02:02 np0005534516 podman[165630]: 2025-11-25 08:02:02.244886406 +0000 UTC m=+1.256255775 container died 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 03:02:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c0f20235bc9416ef867a0c492b9d107a522dc3a334f3856ba01d8286a1d15c7d-merged.mount: Deactivated successfully.
Nov 25 03:02:02 np0005534516 python3.9[165983]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:03 np0005534516 podman[165630]: 2025-11-25 08:02:03.079799777 +0000 UTC m=+2.091169146 container remove 00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:02:03 np0005534516 systemd[1]: libpod-conmon-00c40b91ef7cd921dca2d75a6b1c004cfb5547740f8f64642b751d00278d3540.scope: Deactivated successfully.
Nov 25 03:02:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:02:03 np0005534516 python3.9[166149]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:02:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:02:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bb63cc32-0a7b-4459-9b74-b980cfbe85d2 does not exist
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b22985d0-3768-404d-af1b-7b2f90281cc3 does not exist
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:02:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:03 np0005534516 python3.9[166351]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:02:04 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:02:04 np0005534516 podman[166475]: 2025-11-25 08:02:04.486446071 +0000 UTC m=+0.096334865 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 03:02:04 np0005534516 python3.9[166524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:05 np0005534516 python3.9[166680]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 25 03:02:06 np0005534516 python3.9[166832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:06 np0005534516 python3.9[166984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:07 np0005534516 python3.9[167136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Nov 25 03:02:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 03:02:09 np0005534516 podman[167291]: 2025-11-25 08:02:09.822538543 +0000 UTC m=+0.064651844 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:02:10 np0005534516 python3.9[167289]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:10 np0005534516 python3.9[167461]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 25 03:02:11 np0005534516 python3.9[167613]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:12 np0005534516 python3.9[167765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:02:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Nov 25 03:02:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:14 np0005534516 python3.9[167917]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:15 np0005534516 python3.9[168069]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 03:02:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 03:02:16 np0005534516 python3.9[168221]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:02:16 np0005534516 systemd[1]: Reloading.
Nov 25 03:02:16 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:02:16 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:02:17 np0005534516 python3.9[168408]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 25 03:02:18 np0005534516 python3.9[168561]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:18 np0005534516 python3.9[168714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 25 03:02:19 np0005534516 python3.9[168867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:20 np0005534516 python3.9[169020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:20 np0005534516 python3.9[169173]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 03:02:21 np0005534516 python3.9[169326]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:02:22 np0005534516 python3.9[169479]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 25 03:02:23 np0005534516 python3.9[169632]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 03:02:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:24 np0005534516 python3.9[169790]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 03:02:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 25 03:02:25 np0005534516 python3.9[169950]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 03:02:26 np0005534516 python3.9[170034]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 03:02:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:34 np0005534516 podman[170047]: 2025-11-25 08:02:34.880145376 +0000 UTC m=+0.131869838 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:02:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:40 np0005534516 podman[170223]: 2025-11-25 08:02:40.81752293 +0000 UTC m=+0.066757083 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:02:41.023 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:02:41.023 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:02:41.023 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:02:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:02:53
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'vms', 'backups', 'images', 'volumes', 'default.rgw.log']
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:02:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:02:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:56 np0005534516 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 03:02:56 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 03:02:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:02:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:02:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:03:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:03 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 03:03:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:04 np0005534516 podman[170447]: 2025-11-25 08:03:04.399475646 +0000 UTC m=+0.208351512 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:03:04 np0005534516 podman[170447]: 2025-11-25 08:03:04.55069092 +0000 UTC m=+0.359566696 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 03:03:05 np0005534516 podman[170517]: 2025-11-25 08:03:05.028569489 +0000 UTC m=+0.072448497 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:03:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:03:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:03:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:03:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:03:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:03:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:03:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ae728719-a2ee-4366-9815-367f2e89142a does not exist
Nov 25 03:03:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f73495ed-bde9-4313-b89b-98629b6cc960 does not exist
Nov 25 03:03:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1723563a-50e5-41ac-8b22-934a6e36c393 does not exist
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:03:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:07 np0005534516 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 03:03:07 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 03:03:07 np0005534516 podman[170902]: 2025-11-25 08:03:07.883662666 +0000 UTC m=+0.019450798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.068836602 +0000 UTC m=+0.204624754 container create 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:03:08 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 03:03:08 np0005534516 systemd[1]: Started libpod-conmon-515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7.scope.
Nov 25 03:03:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.3507879 +0000 UTC m=+0.486576012 container init 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.357788 +0000 UTC m=+0.493576112 container start 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:03:08 np0005534516 eloquent_volhard[170921]: 167 167
Nov 25 03:03:08 np0005534516 systemd[1]: libpod-515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7.scope: Deactivated successfully.
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.375406337 +0000 UTC m=+0.511194449 container attach 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.375929051 +0000 UTC m=+0.511717173 container died 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:03:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fe63001abee484b6ed54221dac71494e357b18323d53081563c32d38f698fa19-merged.mount: Deactivated successfully.
Nov 25 03:03:08 np0005534516 podman[170902]: 2025-11-25 08:03:08.447546361 +0000 UTC m=+0.583334473 container remove 515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:03:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:03:08 np0005534516 systemd[1]: libpod-conmon-515dbb0ad39b2b4afc6c507005eb50cfa0300956d3212128101043f1615226f7.scope: Deactivated successfully.
Nov 25 03:03:08 np0005534516 podman[170946]: 2025-11-25 08:03:08.600170785 +0000 UTC m=+0.041009531 container create 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:03:08 np0005534516 systemd[1]: Started libpod-conmon-91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25.scope.
Nov 25 03:03:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:08 np0005534516 podman[170946]: 2025-11-25 08:03:08.581980633 +0000 UTC m=+0.022819399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:08 np0005534516 podman[170946]: 2025-11-25 08:03:08.679571415 +0000 UTC m=+0.120410181 container init 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:03:08 np0005534516 podman[170946]: 2025-11-25 08:03:08.686219776 +0000 UTC m=+0.127058512 container start 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:03:08 np0005534516 podman[170946]: 2025-11-25 08:03:08.690517582 +0000 UTC m=+0.131356328 container attach 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:03:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:09 np0005534516 amazing_mahavira[170963]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:03:09 np0005534516 amazing_mahavira[170963]: --> relative data size: 1.0
Nov 25 03:03:09 np0005534516 amazing_mahavira[170963]: --> All data devices are unavailable
Nov 25 03:03:09 np0005534516 systemd[1]: libpod-91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25.scope: Deactivated successfully.
Nov 25 03:03:09 np0005534516 podman[170946]: 2025-11-25 08:03:09.684886708 +0000 UTC m=+1.125725474 container died 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 03:03:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ac13aa1fce044f1680a37fd30aec76243722fa759cc7a1ac1901fdeac398e400-merged.mount: Deactivated successfully.
Nov 25 03:03:09 np0005534516 podman[170946]: 2025-11-25 08:03:09.772030678 +0000 UTC m=+1.212869424 container remove 91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:03:09 np0005534516 systemd[1]: libpod-conmon-91e049a4739fb370b98c03c6a668bfe302fdb57dde682aa5abada7b38398cf25.scope: Deactivated successfully.
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.368754833 +0000 UTC m=+0.074389896 container create bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.314249507 +0000 UTC m=+0.019884590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:10 np0005534516 systemd[1]: Started libpod-conmon-bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9.scope.
Nov 25 03:03:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.654084892 +0000 UTC m=+0.359719975 container init bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.660542208 +0000 UTC m=+0.366177271 container start bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:03:10 np0005534516 sleepy_hermann[171163]: 167 167
Nov 25 03:03:10 np0005534516 systemd[1]: libpod-bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9.scope: Deactivated successfully.
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.69611128 +0000 UTC m=+0.401746373 container attach bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.700225692 +0000 UTC m=+0.405860755 container died bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:03:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ca00e1d843b964b90be14d1fbd130c94f59817361c82180eec12924e30932235-merged.mount: Deactivated successfully.
Nov 25 03:03:10 np0005534516 podman[171146]: 2025-11-25 08:03:10.910664843 +0000 UTC m=+0.616299916 container remove bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:03:10 np0005534516 systemd[1]: libpod-conmon-bac478c0ff5f6f1989da1ee90cc327a2f830fcf26aae98e6ddfd4f8377f767a9.scope: Deactivated successfully.
Nov 25 03:03:11 np0005534516 podman[171182]: 2025-11-25 08:03:11.013001624 +0000 UTC m=+0.063118560 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 03:03:11 np0005534516 podman[171209]: 2025-11-25 08:03:11.150086228 +0000 UTC m=+0.087651545 container create e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:03:11 np0005534516 podman[171209]: 2025-11-25 08:03:11.081787768 +0000 UTC m=+0.019353105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:11 np0005534516 systemd[1]: Started libpod-conmon-e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f.scope.
Nov 25 03:03:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ff319a46ed137009415b06a76470aa451c0a89a4b5667b4b1e8937f717f8a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ff319a46ed137009415b06a76470aa451c0a89a4b5667b4b1e8937f717f8a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ff319a46ed137009415b06a76470aa451c0a89a4b5667b4b1e8937f717f8a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ff319a46ed137009415b06a76470aa451c0a89a4b5667b4b1e8937f717f8a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:11 np0005534516 podman[171209]: 2025-11-25 08:03:11.345779899 +0000 UTC m=+0.283345246 container init e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:03:11 np0005534516 podman[171209]: 2025-11-25 08:03:11.352594214 +0000 UTC m=+0.290159541 container start e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:03:11 np0005534516 podman[171209]: 2025-11-25 08:03:11.394331835 +0000 UTC m=+0.331897152 container attach e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 03:03:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:12 np0005534516 agitated_austin[171226]: {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    "0": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "devices": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "/dev/loop3"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            ],
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_name": "ceph_lv0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_size": "21470642176",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "name": "ceph_lv0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "tags": {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_name": "ceph",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.crush_device_class": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.encrypted": "0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_id": "0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.vdo": "0"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            },
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "vg_name": "ceph_vg0"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        }
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    ],
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    "1": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "devices": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "/dev/loop4"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            ],
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_name": "ceph_lv1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_size": "21470642176",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "name": "ceph_lv1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "tags": {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_name": "ceph",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.crush_device_class": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.encrypted": "0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_id": "1",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.vdo": "0"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            },
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "vg_name": "ceph_vg1"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        }
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    ],
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    "2": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "devices": [
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "/dev/loop5"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            ],
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_name": "ceph_lv2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_size": "21470642176",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "name": "ceph_lv2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "tags": {
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.cluster_name": "ceph",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.crush_device_class": "",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.encrypted": "0",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osd_id": "2",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:                "ceph.vdo": "0"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            },
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "type": "block",
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:            "vg_name": "ceph_vg2"
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:        }
Nov 25 03:03:12 np0005534516 agitated_austin[171226]:    ]
Nov 25 03:03:12 np0005534516 agitated_austin[171226]: }
Nov 25 03:03:12 np0005534516 systemd[1]: libpod-e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f.scope: Deactivated successfully.
Nov 25 03:03:12 np0005534516 podman[171209]: 2025-11-25 08:03:12.104193193 +0000 UTC m=+1.041758560 container died e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:03:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-42ff319a46ed137009415b06a76470aa451c0a89a4b5667b4b1e8937f717f8a8-merged.mount: Deactivated successfully.
Nov 25 03:03:12 np0005534516 podman[171209]: 2025-11-25 08:03:12.175194606 +0000 UTC m=+1.112759913 container remove e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:03:12 np0005534516 systemd[1]: libpod-conmon-e83b05a5a6bf24aeaf0687605f15c15baefc4baf58b83abdcc5a6a0d309d831f.scope: Deactivated successfully.
Nov 25 03:03:12 np0005534516 podman[171386]: 2025-11-25 08:03:12.876045881 +0000 UTC m=+0.060087318 container create 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:03:12 np0005534516 systemd[1]: Started libpod-conmon-74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c.scope.
Nov 25 03:03:12 np0005534516 podman[171386]: 2025-11-25 08:03:12.845672059 +0000 UTC m=+0.029713516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:12 np0005534516 podman[171386]: 2025-11-25 08:03:12.997764109 +0000 UTC m=+0.181805606 container init 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:03:13 np0005534516 podman[171386]: 2025-11-25 08:03:13.006817254 +0000 UTC m=+0.190858661 container start 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:03:13 np0005534516 ecstatic_goodall[171402]: 167 167
Nov 25 03:03:13 np0005534516 systemd[1]: libpod-74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c.scope: Deactivated successfully.
Nov 25 03:03:13 np0005534516 podman[171386]: 2025-11-25 08:03:13.024964546 +0000 UTC m=+0.209005963 container attach 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:03:13 np0005534516 podman[171386]: 2025-11-25 08:03:13.025565502 +0000 UTC m=+0.209606919 container died 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:03:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fe906950ca6c6187aeb0b630e09153a4c55429e9faf2fc34a8847c3a748c3fe9-merged.mount: Deactivated successfully.
Nov 25 03:03:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:13 np0005534516 podman[171386]: 2025-11-25 08:03:13.541121377 +0000 UTC m=+0.725162824 container remove 74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:03:13 np0005534516 systemd[1]: libpod-conmon-74c3ee0f357c0a5e81fe4c852659cba2b53a3e9ca3d321b78edcb3cefa623b3c.scope: Deactivated successfully.
Nov 25 03:03:13 np0005534516 podman[171428]: 2025-11-25 08:03:13.719834979 +0000 UTC m=+0.047784286 container create bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:03:13 np0005534516 systemd[1]: Started libpod-conmon-bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa.scope.
Nov 25 03:03:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:03:13 np0005534516 podman[171428]: 2025-11-25 08:03:13.696407163 +0000 UTC m=+0.024356520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:03:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce33c84e4242acd55c231826932fc8a42997e9219b978e6b07b3606c4f1b9f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce33c84e4242acd55c231826932fc8a42997e9219b978e6b07b3606c4f1b9f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce33c84e4242acd55c231826932fc8a42997e9219b978e6b07b3606c4f1b9f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce33c84e4242acd55c231826932fc8a42997e9219b978e6b07b3606c4f1b9f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:03:13 np0005534516 podman[171428]: 2025-11-25 08:03:13.813771473 +0000 UTC m=+0.141720770 container init bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:03:13 np0005534516 podman[171428]: 2025-11-25 08:03:13.823086696 +0000 UTC m=+0.151036003 container start bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:03:13 np0005534516 podman[171428]: 2025-11-25 08:03:13.827474914 +0000 UTC m=+0.155424221 container attach bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:03:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:14 np0005534516 serene_yalow[171444]: {
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_id": 1,
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "type": "bluestore"
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    },
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_id": 2,
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "type": "bluestore"
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    },
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_id": 0,
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:        "type": "bluestore"
Nov 25 03:03:14 np0005534516 serene_yalow[171444]:    }
Nov 25 03:03:14 np0005534516 serene_yalow[171444]: }
Nov 25 03:03:14 np0005534516 systemd[1]: libpod-bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa.scope: Deactivated successfully.
Nov 25 03:03:14 np0005534516 podman[171428]: 2025-11-25 08:03:14.821042338 +0000 UTC m=+1.148991645 container died bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:03:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bce33c84e4242acd55c231826932fc8a42997e9219b978e6b07b3606c4f1b9f4-merged.mount: Deactivated successfully.
Nov 25 03:03:14 np0005534516 podman[171428]: 2025-11-25 08:03:14.871468984 +0000 UTC m=+1.199418301 container remove bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:03:14 np0005534516 systemd[1]: libpod-conmon-bd356968322ec481173a8fab4ca1c39a1408a1885b46614532e794469d5bfcaa.scope: Deactivated successfully.
Nov 25 03:03:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:03:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:03:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:14 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 17b0449f-a96b-44ae-ad52-0f187f4271dc does not exist
Nov 25 03:03:14 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d410af0b-424e-40f2-8225-95f9a09134be does not exist
Nov 25 03:03:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:03:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:35 np0005534516 podman[179307]: 2025-11-25 08:03:35.85118671 +0000 UTC m=+0.101073399 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:03:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:03:41.024 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:03:41.025 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:03:41.025 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:03:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:41 np0005534516 podman[182628]: 2025-11-25 08:03:41.796103998 +0000 UTC m=+0.047042196 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:03:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.767392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830767496, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3520989, "memory_usage": 3570664, "flush_reason": "Manual Compaction"}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830893556, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3456466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9708, "largest_seqno": 11746, "table_properties": {"data_size": 3447170, "index_size": 5918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17778, "raw_average_key_size": 19, "raw_value_size": 3428799, "raw_average_value_size": 3751, "num_data_blocks": 268, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057592, "oldest_key_time": 1764057592, "file_creation_time": 1764057830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 126647 microseconds, and 14836 cpu microseconds.
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.894067) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3456466 bytes OK
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.894182) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.896543) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.896561) EVENT_LOG_v1 {"time_micros": 1764057830896555, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.896585) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3512489, prev total WAL file size 3512489, number of live WAL files 2.
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.898104) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3375KB)], [26(6372KB)]
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830898203, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9981656, "oldest_snapshot_seqno": -1}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3740 keys, 8245940 bytes, temperature: kUnknown
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830986993, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8245940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8217150, "index_size": 18312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90146, "raw_average_key_size": 24, "raw_value_size": 8145808, "raw_average_value_size": 2178, "num_data_blocks": 798, "num_entries": 3740, "num_filter_entries": 3740, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764057830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.987415) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8245940 bytes
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.996052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.2 rd, 92.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.2 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4254, records dropped: 514 output_compression: NoCompression
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.996120) EVENT_LOG_v1 {"time_micros": 1764057830996095, "job": 10, "event": "compaction_finished", "compaction_time_micros": 88966, "compaction_time_cpu_micros": 18562, "output_level": 6, "num_output_files": 1, "total_output_size": 8245940, "num_input_records": 4254, "num_output_records": 3740, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830997102, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057830998489, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.897978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.998553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.998561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.998564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.998567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:50 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:03:50.998570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:03:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:03:53
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:03:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:03:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:54 np0005534516 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 03:03:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:03:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:03:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:04:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:06 np0005534516 kernel: SELinux:  Converting 2769 SID table entries...
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability open_perms=1
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability always_check_network=0
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 03:04:06 np0005534516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 03:04:06 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 03:04:06 np0005534516 podman[188390]: 2025-11-25 08:04:06.881121731 +0000 UTC m=+0.130102235 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:04:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:09 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 03:04:09 np0005534516 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 03:04:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:12 np0005534516 podman[188469]: 2025-11-25 08:04:12.573100613 +0000 UTC m=+0.078096802 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:04:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:04:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:04:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:04:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:04:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a30e3883-32eb-4914-a4c6-5285ac2a40b2 does not exist
Nov 25 03:04:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7becd118-6792-4ad4-927b-03004a43d119 does not exist
Nov 25 03:04:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c749f1ce-d6d6-4fb7-999d-e0af18f209b0 does not exist
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:04:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:04:16 np0005534516 podman[188771]: 2025-11-25 08:04:16.830355225 +0000 UTC m=+0.023549674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.006788138 +0000 UTC m=+0.199982577 container create 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:04:17 np0005534516 systemd[1]: Started libpod-conmon-9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2.scope.
Nov 25 03:04:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.266787353 +0000 UTC m=+0.459981802 container init 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.275521956 +0000 UTC m=+0.468716405 container start 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:04:17 np0005534516 beautiful_wilson[188791]: 167 167
Nov 25 03:04:17 np0005534516 systemd[1]: libpod-9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2.scope: Deactivated successfully.
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.371165064 +0000 UTC m=+0.564359483 container attach 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.372965414 +0000 UTC m=+0.566159843 container died 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:04:17 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:17 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:04:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9532fe31a2079c3c9a45a1101ba61a57a2f08518c1fabce5edbf33c13596e265-merged.mount: Deactivated successfully.
Nov 25 03:04:17 np0005534516 podman[188771]: 2025-11-25 08:04:17.637916176 +0000 UTC m=+0.831110615 container remove 9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wilson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:17 np0005534516 systemd[1]: libpod-conmon-9caa20cc2c61a1d836edffb02d2fae7dbb2b8b376c503fcac6b70785044cb2c2.scope: Deactivated successfully.
Nov 25 03:04:17 np0005534516 podman[188863]: 2025-11-25 08:04:17.807977982 +0000 UTC m=+0.046875294 container create b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:04:17 np0005534516 systemd[1]: Started libpod-conmon-b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef.scope.
Nov 25 03:04:17 np0005534516 podman[188863]: 2025-11-25 08:04:17.786398092 +0000 UTC m=+0.025295494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:17 np0005534516 podman[188863]: 2025-11-25 08:04:17.965578061 +0000 UTC m=+0.204475423 container init b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:04:17 np0005534516 podman[188863]: 2025-11-25 08:04:17.975748514 +0000 UTC m=+0.214645836 container start b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:04:18 np0005534516 podman[188863]: 2025-11-25 08:04:18.019759817 +0000 UTC m=+0.258657149 container attach b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:04:19 np0005534516 goofy_colden[188894]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:04:19 np0005534516 goofy_colden[188894]: --> relative data size: 1.0
Nov 25 03:04:19 np0005534516 goofy_colden[188894]: --> All data devices are unavailable
Nov 25 03:04:19 np0005534516 systemd[1]: libpod-b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef.scope: Deactivated successfully.
Nov 25 03:04:19 np0005534516 podman[188863]: 2025-11-25 08:04:19.064054426 +0000 UTC m=+1.302951748 container died b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:04:19 np0005534516 systemd[1]: libpod-b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef.scope: Consumed 1.016s CPU time.
Nov 25 03:04:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-72b999b64dbf298d8ae8f2504957570299207551e733ffa40539dbcb1cb7702b-merged.mount: Deactivated successfully.
Nov 25 03:04:19 np0005534516 podman[188863]: 2025-11-25 08:04:19.127553811 +0000 UTC m=+1.366451123 container remove b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:04:19 np0005534516 systemd[1]: libpod-conmon-b37ca3305b8fd594dd7b2fe815f4e1c575539b985013b0121078f109d2624eef.scope: Deactivated successfully.
Nov 25 03:04:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.489770004 +0000 UTC m=+0.041928166 container create 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:20 np0005534516 systemd[1]: Started libpod-conmon-79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2.scope.
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.472645568 +0000 UTC m=+0.024803750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.591767939 +0000 UTC m=+0.143926121 container init 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.598078844 +0000 UTC m=+0.150237026 container start 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:04:20 np0005534516 charming_margulis[189205]: 167 167
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.602802376 +0000 UTC m=+0.154960538 container attach 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:04:20 np0005534516 systemd[1]: libpod-79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2.scope: Deactivated successfully.
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.60440202 +0000 UTC m=+0.156560182 container died 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:04:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3456ac2ce47ddbdfcbd38a41cbf644803d708f31fa1d457ef8ad87bf1b62b7aa-merged.mount: Deactivated successfully.
Nov 25 03:04:20 np0005534516 podman[189189]: 2025-11-25 08:04:20.642464888 +0000 UTC m=+0.194623050 container remove 79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 03:04:20 np0005534516 systemd[1]: libpod-conmon-79ede7513d585e6872f98c9fbe9e71f052fcd13ab42b0f1c19b946a869aa1cf2.scope: Deactivated successfully.
Nov 25 03:04:20 np0005534516 podman[189229]: 2025-11-25 08:04:20.880873193 +0000 UTC m=+0.075158330 container create 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:04:20 np0005534516 systemd[1]: Started libpod-conmon-17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70.scope.
Nov 25 03:04:20 np0005534516 podman[189229]: 2025-11-25 08:04:20.83975538 +0000 UTC m=+0.034040547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665921014f777024274c9e6eac0566375f530063adfcb736133c97cbad03d8b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665921014f777024274c9e6eac0566375f530063adfcb736133c97cbad03d8b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665921014f777024274c9e6eac0566375f530063adfcb736133c97cbad03d8b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665921014f777024274c9e6eac0566375f530063adfcb736133c97cbad03d8b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:20 np0005534516 podman[189229]: 2025-11-25 08:04:20.981375796 +0000 UTC m=+0.175660993 container init 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:04:20 np0005534516 podman[189229]: 2025-11-25 08:04:20.989611374 +0000 UTC m=+0.183896531 container start 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:04:20 np0005534516 podman[189229]: 2025-11-25 08:04:20.99735473 +0000 UTC m=+0.191639877 container attach 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:04:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]: {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    "0": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "devices": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "/dev/loop3"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            ],
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_name": "ceph_lv0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_size": "21470642176",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "name": "ceph_lv0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "tags": {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_name": "ceph",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.crush_device_class": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.encrypted": "0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_id": "0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.vdo": "0"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            },
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "vg_name": "ceph_vg0"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        }
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    ],
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    "1": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "devices": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "/dev/loop4"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            ],
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_name": "ceph_lv1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_size": "21470642176",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "name": "ceph_lv1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "tags": {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_name": "ceph",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.crush_device_class": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.encrypted": "0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_id": "1",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.vdo": "0"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            },
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "vg_name": "ceph_vg1"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        }
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    ],
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    "2": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "devices": [
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "/dev/loop5"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            ],
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_name": "ceph_lv2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_size": "21470642176",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "name": "ceph_lv2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "tags": {
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.cluster_name": "ceph",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.crush_device_class": "",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.encrypted": "0",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osd_id": "2",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:                "ceph.vdo": "0"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            },
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "type": "block",
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:            "vg_name": "ceph_vg2"
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:        }
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]:    ]
Nov 25 03:04:21 np0005534516 thirsty_chebyshev[189300]: }
Nov 25 03:04:21 np0005534516 systemd[1]: libpod-17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70.scope: Deactivated successfully.
Nov 25 03:04:21 np0005534516 podman[189229]: 2025-11-25 08:04:21.840794848 +0000 UTC m=+1.035079995 container died 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:04:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-665921014f777024274c9e6eac0566375f530063adfcb736133c97cbad03d8b6-merged.mount: Deactivated successfully.
Nov 25 03:04:21 np0005534516 podman[189229]: 2025-11-25 08:04:21.914870216 +0000 UTC m=+1.109155353 container remove 17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:04:21 np0005534516 systemd[1]: libpod-conmon-17f7965ed684e0eac28d60e26eba6c6721ff5c8fef03606f2ecbde648ec32c70.scope: Deactivated successfully.
Nov 25 03:04:21 np0005534516 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 03:04:21 np0005534516 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 03:04:21 np0005534516 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 03:04:21 np0005534516 systemd[1]: sshd.service: Consumed 2.803s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 03:04:21 np0005534516 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 03:04:21 np0005534516 systemd[1]: Stopping sshd-keygen.target...
Nov 25 03:04:21 np0005534516 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 03:04:21 np0005534516 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 03:04:21 np0005534516 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 03:04:21 np0005534516 systemd[1]: Reached target sshd-keygen.target.
Nov 25 03:04:21 np0005534516 systemd[1]: Starting OpenSSH server daemon...
Nov 25 03:04:21 np0005534516 systemd[1]: Started OpenSSH server daemon.
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.576968904 +0000 UTC m=+0.095519645 container create 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.506380912 +0000 UTC m=+0.024931723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:22 np0005534516 systemd[1]: Started libpod-conmon-4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587.scope.
Nov 25 03:04:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.688958766 +0000 UTC m=+0.207509527 container init 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.702878593 +0000 UTC m=+0.221429344 container start 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:04:22 np0005534516 strange_noyce[190130]: 167 167
Nov 25 03:04:22 np0005534516 systemd[1]: libpod-4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587.scope: Deactivated successfully.
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.716426389 +0000 UTC m=+0.234977140 container attach 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.716982225 +0000 UTC m=+0.235532966 container died 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:04:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6698a1b36b41e191e15f7302e1dba3d87b90d9d8f6881fbab8494ae5d22bfeac-merged.mount: Deactivated successfully.
Nov 25 03:04:22 np0005534516 podman[190094]: 2025-11-25 08:04:22.863694582 +0000 UTC m=+0.382245323 container remove 4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:04:22 np0005534516 systemd[1]: libpod-conmon-4299eb44df6401abc07f6a945d8ee53b71b0e5582ac23c0d1b14e21925e28587.scope: Deactivated successfully.
Nov 25 03:04:23 np0005534516 podman[190193]: 2025-11-25 08:04:23.089193948 +0000 UTC m=+0.100248827 container create 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:04:23 np0005534516 podman[190193]: 2025-11-25 08:04:23.010678376 +0000 UTC m=+0.021733255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:04:23 np0005534516 systemd[1]: Started libpod-conmon-305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43.scope.
Nov 25 03:04:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:04:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764b6355a6e1f9d8ef7bb6971e2564e19b491a0639a6d35eb3e95d3b27ff2b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764b6355a6e1f9d8ef7bb6971e2564e19b491a0639a6d35eb3e95d3b27ff2b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764b6355a6e1f9d8ef7bb6971e2564e19b491a0639a6d35eb3e95d3b27ff2b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764b6355a6e1f9d8ef7bb6971e2564e19b491a0639a6d35eb3e95d3b27ff2b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:04:23 np0005534516 podman[190193]: 2025-11-25 08:04:23.180793374 +0000 UTC m=+0.191848243 container init 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:04:23 np0005534516 podman[190193]: 2025-11-25 08:04:23.188144738 +0000 UTC m=+0.199199607 container start 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:04:23 np0005534516 podman[190193]: 2025-11-25 08:04:23.22422381 +0000 UTC m=+0.235278769 container attach 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]: {
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_id": 1,
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "type": "bluestore"
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    },
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_id": 2,
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "type": "bluestore"
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    },
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_id": 0,
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:        "type": "bluestore"
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]:    }
Nov 25 03:04:24 np0005534516 youthful_shirley[190229]: }
Nov 25 03:04:24 np0005534516 podman[190193]: 2025-11-25 08:04:24.258097869 +0000 UTC m=+1.269152748 container died 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:04:24 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 03:04:24 np0005534516 systemd[1]: libpod-305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43.scope: Deactivated successfully.
Nov 25 03:04:24 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 03:04:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b764b6355a6e1f9d8ef7bb6971e2564e19b491a0639a6d35eb3e95d3b27ff2b7-merged.mount: Deactivated successfully.
Nov 25 03:04:24 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:24 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:24 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:24 np0005534516 podman[190193]: 2025-11-25 08:04:24.492954416 +0000 UTC m=+1.504009285 container remove 305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:04:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:04:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:04:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c960bdc3-422f-469e-b686-d8bef592faf0 does not exist
Nov 25 03:04:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 46127485-8461-459a-882e-2199651aa02a does not exist
Nov 25 03:04:24 np0005534516 systemd[1]: libpod-conmon-305c2225f0b3e07f90b38d9c57982c80cdfff546e64c93388a43fb3a4e739c43.scope: Deactivated successfully.
Nov 25 03:04:24 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 03:04:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:04:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:32 np0005534516 python3.9[197802]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:04:32 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:32 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:32 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:33 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 03:04:33 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 03:04:33 np0005534516 systemd[1]: man-db-cache-update.service: Consumed 11.013s CPU time.
Nov 25 03:04:33 np0005534516 systemd[1]: run-rb458a2694b454ed58ec9d533dc0d0f38.service: Deactivated successfully.
Nov 25 03:04:33 np0005534516 python3.9[199097]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:04:33 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:33 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:33 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:34 np0005534516 python3.9[199355]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:04:34 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:34 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:34 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:35 np0005534516 python3.9[199546]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:04:35 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:35 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:35 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:36 np0005534516 python3.9[199736]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:36 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:36 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:36 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:37 np0005534516 podman[199775]: 2025-11-25 08:04:37.211869964 +0000 UTC m=+0.096593046 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:04:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:37 np0005534516 python3.9[199952]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:37 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:38 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:38 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:39 np0005534516 python3.9[200142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:39 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:39 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:39 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:40 np0005534516 python3.9[200332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:04:41.026 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:04:41.027 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:04:41.027 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:04:41 np0005534516 python3.9[200487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:41 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:41 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:41 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:42 np0005534516 python3.9[200677]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 03:04:42 np0005534516 systemd[1]: Reloading.
Nov 25 03:04:42 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:04:42 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:04:43 np0005534516 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 03:04:43 np0005534516 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 03:04:43 np0005534516 podman[200717]: 2025-11-25 08:04:43.151575769 +0000 UTC m=+0.082801912 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:04:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:43 np0005534516 python3.9[200887]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:44 np0005534516 auditd[703]: Audit daemon rotating log files
Nov 25 03:04:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:45 np0005534516 python3.9[201042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:45 np0005534516 python3.9[201197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:46 np0005534516 python3.9[201352]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:47 np0005534516 python3.9[201507]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:48 np0005534516 python3.9[201662]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:49 np0005534516 python3.9[201817]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:50 np0005534516 python3.9[201972]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:51 np0005534516 python3.9[202127]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:51 np0005534516 python3.9[202282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:52 np0005534516 python3.9[202437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:04:53
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:04:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:04:53 np0005534516 python3.9[202592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:54 np0005534516 python3.9[202747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:04:55 np0005534516 python3.9[202902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 03:04:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:57 np0005534516 python3.9[203057]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:04:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:58 np0005534516 python3.9[203209]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:04:58 np0005534516 python3.9[203361]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:04:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:04:59 np0005534516 python3.9[203513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:05:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:00 np0005534516 python3.9[203665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:05:01 np0005534516 python3.9[203817]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:05:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:02 np0005534516 python3.9[203969]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:02 np0005534516 python3.9[204094]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057901.35197-554-279725786917200/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:05:03 np0005534516 python3.9[204246]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:04 np0005534516 python3.9[204371]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057903.0117712-554-97567060199716/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:04 np0005534516 python3.9[204523]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:05 np0005534516 python3.9[204648]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057904.2430272-554-234469163433006/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:06 np0005534516 python3.9[204800]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:06 np0005534516 python3.9[204925]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057905.5406904-554-71140712574344/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:07 np0005534516 python3.9[205077]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:07 np0005534516 podman[205174]: 2025-11-25 08:05:07.819791908 +0000 UTC m=+0.085801176 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:05:07 np0005534516 python3.9[205223]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057906.788498-554-173859991961924/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:08 np0005534516 python3.9[205381]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:09 np0005534516 python3.9[205506]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057908.1528542-554-172238568258064/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:10 np0005534516 python3.9[205658]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:10 np0005534516 python3.9[205781]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057909.4917467-554-688869147539/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:11 np0005534516 python3.9[205933]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:11 np0005534516 python3.9[206058]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764057910.7569072-554-23680783227881/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:12 np0005534516 python3.9[206210]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 03:05:13 np0005534516 python3.9[206363]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:13 np0005534516 podman[206486]: 2025-11-25 08:05:13.83717091 +0000 UTC m=+0.064602687 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:05:14 np0005534516 python3.9[206534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:14 np0005534516 python3.9[206686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:15 np0005534516 python3.9[206838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:16 np0005534516 python3.9[206990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:16 np0005534516 python3.9[207142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:17 np0005534516 python3.9[207294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:18 np0005534516 python3.9[207446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:18 np0005534516 python3.9[207598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:19 np0005534516 python3.9[207750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:20 np0005534516 python3.9[207902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:20 np0005534516 python3.9[208054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:21 np0005534516 python3.9[208206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:22 np0005534516 python3.9[208358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:22 np0005534516 python3.9[208510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:23 np0005534516 python3.9[208633]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057922.34604-775-237260901488124/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:24 np0005534516 python3.9[208785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:24 np0005534516 python3.9[208908]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057923.6642723-775-256430436317593/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 49a00373-c870-476f-9a3d-ef688701f76e does not exist
Nov 25 03:05:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fab60384-4eec-4a4a-ac7e-b76d93c1ac4a does not exist
Nov 25 03:05:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b23f9c9f-45fa-4412-befd-f79514517790 does not exist
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:05:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:05:25 np0005534516 python3.9[209177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:26 np0005534516 python3.9[209415]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057925.092716-775-214005869508627/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:05:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.220000129 +0000 UTC m=+0.065897783 container create 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.178038533 +0000 UTC m=+0.023936207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:26 np0005534516 systemd[1]: Started libpod-conmon-3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565.scope.
Nov 25 03:05:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.495791002 +0000 UTC m=+0.341688686 container init 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.503790985 +0000 UTC m=+0.349688639 container start 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:05:26 np0005534516 pensive_lewin[209513]: 167 167
Nov 25 03:05:26 np0005534516 systemd[1]: libpod-3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565.scope: Deactivated successfully.
Nov 25 03:05:26 np0005534516 conmon[209513]: conmon 3db02cf0aa9c893d543d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565.scope/container/memory.events
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.536823003 +0000 UTC m=+0.382720677 container attach 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.538147849 +0000 UTC m=+0.384045503 container died 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:05:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-df85a884aa3a039e67c02a175aa1c766953487009683f6be616f20383abbf2ff-merged.mount: Deactivated successfully.
Nov 25 03:05:26 np0005534516 python3.9[209639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:26 np0005534516 podman[209455]: 2025-11-25 08:05:26.793752923 +0000 UTC m=+0.639650617 container remove 3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:05:26 np0005534516 systemd[1]: libpod-conmon-3db02cf0aa9c893d543db5747d1c8ed47e8fdebb64af039a12aff82e5f6d5565.scope: Deactivated successfully.
Nov 25 03:05:27 np0005534516 podman[209718]: 2025-11-25 08:05:26.974234248 +0000 UTC m=+0.022951529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:27 np0005534516 podman[209718]: 2025-11-25 08:05:27.197852042 +0000 UTC m=+0.246569343 container create a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:05:27 np0005534516 systemd[1]: Started libpod-conmon-a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc.scope.
Nov 25 03:05:27 np0005534516 python3.9[209784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057926.3151433-775-72026243015685/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:27 np0005534516 podman[209718]: 2025-11-25 08:05:27.39176808 +0000 UTC m=+0.440485391 container init a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:05:27 np0005534516 podman[209718]: 2025-11-25 08:05:27.402373105 +0000 UTC m=+0.451090386 container start a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:05:27 np0005534516 podman[209718]: 2025-11-25 08:05:27.481195745 +0000 UTC m=+0.529913026 container attach a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:05:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:28 np0005534516 python3.9[209943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:28 np0005534516 vibrant_tu[209787]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:05:28 np0005534516 vibrant_tu[209787]: --> relative data size: 1.0
Nov 25 03:05:28 np0005534516 vibrant_tu[209787]: --> All data devices are unavailable
Nov 25 03:05:28 np0005534516 systemd[1]: libpod-a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc.scope: Deactivated successfully.
Nov 25 03:05:28 np0005534516 systemd[1]: libpod-a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc.scope: Consumed 1.041s CPU time.
Nov 25 03:05:28 np0005534516 python3.9[210083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057927.501409-775-191194974572070/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:28 np0005534516 podman[210091]: 2025-11-25 08:05:28.564144719 +0000 UTC m=+0.030378795 container died a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:05:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e83836fc44b4aef679aef032f58630c9a3e97a4c5206dbdec4caef5bffc7c3a6-merged.mount: Deactivated successfully.
Nov 25 03:05:28 np0005534516 podman[210091]: 2025-11-25 08:05:28.694568263 +0000 UTC m=+0.160802339 container remove a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:05:28 np0005534516 systemd[1]: libpod-conmon-a98f38c990d7a50296bd429e821273e4a66aa9e0f7fc9ab6fb55f79ae077c9fc.scope: Deactivated successfully.
Nov 25 03:05:29 np0005534516 python3.9[210358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.364480738 +0000 UTC m=+0.070289974 container create 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.31883524 +0000 UTC m=+0.024644526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:29 np0005534516 systemd[1]: Started libpod-conmon-89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18.scope.
Nov 25 03:05:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.459869059 +0000 UTC m=+0.165678315 container init 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.468848039 +0000 UTC m=+0.174657275 container start 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 03:05:29 np0005534516 cranky_dhawan[210438]: 167 167
Nov 25 03:05:29 np0005534516 systemd[1]: libpod-89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18.scope: Deactivated successfully.
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.486040017 +0000 UTC m=+0.191849273 container attach 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.486728346 +0000 UTC m=+0.192537592 container died 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:05:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a473aea29582498d3d4ef584702bdbe9de07c4fab249a31e21904d49704b2e06-merged.mount: Deactivated successfully.
Nov 25 03:05:29 np0005534516 podman[210398]: 2025-11-25 08:05:29.547234937 +0000 UTC m=+0.253044173 container remove 89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:05:29 np0005534516 systemd[1]: libpod-conmon-89b4204912d19516a06cf3a2d02f7cd13974a577fa760e06f1ef9e485121eb18.scope: Deactivated successfully.
Nov 25 03:05:29 np0005534516 podman[210538]: 2025-11-25 08:05:29.724204325 +0000 UTC m=+0.046136704 container create ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:05:29 np0005534516 systemd[1]: Started libpod-conmon-ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b.scope.
Nov 25 03:05:29 np0005534516 podman[210538]: 2025-11-25 08:05:29.703447928 +0000 UTC m=+0.025380337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb93718ffc0ba83fd880c9b4604f2c3bf4cd595f66875eb9aea14284e50f2fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb93718ffc0ba83fd880c9b4604f2c3bf4cd595f66875eb9aea14284e50f2fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb93718ffc0ba83fd880c9b4604f2c3bf4cd595f66875eb9aea14284e50f2fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb93718ffc0ba83fd880c9b4604f2c3bf4cd595f66875eb9aea14284e50f2fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:29 np0005534516 podman[210538]: 2025-11-25 08:05:29.854118895 +0000 UTC m=+0.176051334 container init ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:05:29 np0005534516 podman[210538]: 2025-11-25 08:05:29.862333593 +0000 UTC m=+0.184266012 container start ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:05:29 np0005534516 podman[210538]: 2025-11-25 08:05:29.867405124 +0000 UTC m=+0.189337553 container attach ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:05:29 np0005534516 python3.9[210577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057928.8172677-775-230887794432443/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:30 np0005534516 python3.9[210736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]: {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    "0": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "devices": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "/dev/loop3"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            ],
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_name": "ceph_lv0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_size": "21470642176",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "name": "ceph_lv0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "tags": {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_name": "ceph",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.crush_device_class": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.encrypted": "0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_id": "0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.vdo": "0"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            },
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "vg_name": "ceph_vg0"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        }
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    ],
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    "1": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "devices": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "/dev/loop4"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            ],
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_name": "ceph_lv1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_size": "21470642176",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "name": "ceph_lv1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "tags": {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_name": "ceph",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.crush_device_class": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.encrypted": "0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_id": "1",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.vdo": "0"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            },
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "vg_name": "ceph_vg1"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        }
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    ],
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    "2": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "devices": [
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "/dev/loop5"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            ],
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_name": "ceph_lv2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_size": "21470642176",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "name": "ceph_lv2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "tags": {
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.cluster_name": "ceph",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.crush_device_class": "",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.encrypted": "0",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osd_id": "2",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:                "ceph.vdo": "0"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            },
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "type": "block",
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:            "vg_name": "ceph_vg2"
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:        }
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]:    ]
Nov 25 03:05:30 np0005534516 interesting_murdock[210580]: }
Nov 25 03:05:30 np0005534516 systemd[1]: libpod-ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b.scope: Deactivated successfully.
Nov 25 03:05:30 np0005534516 podman[210538]: 2025-11-25 08:05:30.681372633 +0000 UTC m=+1.003305012 container died ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:05:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fb93718ffc0ba83fd880c9b4604f2c3bf4cd595f66875eb9aea14284e50f2fa6-merged.mount: Deactivated successfully.
Nov 25 03:05:30 np0005534516 podman[210538]: 2025-11-25 08:05:30.774874671 +0000 UTC m=+1.096807050 container remove ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:05:30 np0005534516 systemd[1]: libpod-conmon-ec1ac01fdca395d1d19e09eb0522c801a51951a3361b4bc99f9728ad6712969b.scope: Deactivated successfully.
Nov 25 03:05:31 np0005534516 python3.9[210924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057930.0964048-775-250175973188315/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.541425392 +0000 UTC m=+0.056912912 container create a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:05:31 np0005534516 systemd[1]: Started libpod-conmon-a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33.scope.
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.507639763 +0000 UTC m=+0.023127303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.625156339 +0000 UTC m=+0.140643849 container init a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.634568881 +0000 UTC m=+0.150056401 container start a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:05:31 np0005534516 inspiring_zhukovsky[211151]: 167 167
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.638952312 +0000 UTC m=+0.154439822 container attach a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:05:31 np0005534516 systemd[1]: libpod-a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33.scope: Deactivated successfully.
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.640377112 +0000 UTC m=+0.155864622 container died a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:05:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d88ee3f872e9221c3d3ed9fd6f40a34d9cb128130768d4922ffac54bac792e47-merged.mount: Deactivated successfully.
Nov 25 03:05:31 np0005534516 podman[211088]: 2025-11-25 08:05:31.715277663 +0000 UTC m=+0.230765193 container remove a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:05:31 np0005534516 systemd[1]: libpod-conmon-a69fa864416d7f703b0f575e115e25c82e653efb678bb16fc576fa579ad82d33.scope: Deactivated successfully.
Nov 25 03:05:31 np0005534516 podman[211206]: 2025-11-25 08:05:31.894892335 +0000 UTC m=+0.051274767 container create cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:05:31 np0005534516 python3.9[211198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:31 np0005534516 systemd[1]: Started libpod-conmon-cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729.scope.
Nov 25 03:05:31 np0005534516 podman[211206]: 2025-11-25 08:05:31.871414122 +0000 UTC m=+0.027796604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:05:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:05:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928132bb0d56cfa5337f33667579d1dd8654c9c91ac3ddd58da38d991dc4de80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928132bb0d56cfa5337f33667579d1dd8654c9c91ac3ddd58da38d991dc4de80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928132bb0d56cfa5337f33667579d1dd8654c9c91ac3ddd58da38d991dc4de80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928132bb0d56cfa5337f33667579d1dd8654c9c91ac3ddd58da38d991dc4de80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:05:32 np0005534516 podman[211206]: 2025-11-25 08:05:32.00630162 +0000 UTC m=+0.162684072 container init cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:05:32 np0005534516 podman[211206]: 2025-11-25 08:05:32.016109843 +0000 UTC m=+0.172492275 container start cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:05:32 np0005534516 podman[211206]: 2025-11-25 08:05:32.029643759 +0000 UTC m=+0.186026221 container attach cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:05:32 np0005534516 python3.9[211349]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057931.3931994-775-213533273445717/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]: {
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_id": 1,
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "type": "bluestore"
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    },
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_id": 2,
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "type": "bluestore"
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    },
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_id": 0,
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:        "type": "bluestore"
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]:    }
Nov 25 03:05:32 np0005534516 youthful_cartwright[211235]: }
Nov 25 03:05:33 np0005534516 systemd[1]: libpod-cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729.scope: Deactivated successfully.
Nov 25 03:05:33 np0005534516 systemd[1]: libpod-cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729.scope: Consumed 1.009s CPU time.
Nov 25 03:05:33 np0005534516 podman[211530]: 2025-11-25 08:05:33.072717344 +0000 UTC m=+0.029637845 container died cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:05:33 np0005534516 python3.9[211521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-928132bb0d56cfa5337f33667579d1dd8654c9c91ac3ddd58da38d991dc4de80-merged.mount: Deactivated successfully.
Nov 25 03:05:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:33 np0005534516 podman[211530]: 2025-11-25 08:05:33.574430206 +0000 UTC m=+0.531350677 container remove cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cartwright, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:05:33 np0005534516 systemd[1]: libpod-conmon-cdef55a4d264a15e165b9bb08e9849b695a5c087f39a689bf2d798a5eff1d729.scope: Deactivated successfully.
Nov 25 03:05:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:05:33 np0005534516 python3.9[211668]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057932.6123564-775-247121891995892/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:05:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 216e9d8a-63f9-4937-ab82-ee7d108e22f9 does not exist
Nov 25 03:05:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7cad7bc4-c63c-47d8-b0f6-19c78ddd3f4f does not exist
Nov 25 03:05:34 np0005534516 python3.9[211870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:05:34 np0005534516 python3.9[211993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057933.8479204-775-188448090342935/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:35 np0005534516 python3.9[212145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:36 np0005534516 python3.9[212268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057934.999886-775-1359779850124/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:36 np0005534516 python3.9[212420]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:37 np0005534516 python3.9[212543]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057936.193932-775-63671850868669/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:37 np0005534516 podman[212667]: 2025-11-25 08:05:37.961659679 +0000 UTC m=+0.085565678 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 03:05:38 np0005534516 python3.9[212715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:38 np0005534516 python3.9[212844]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057937.540135-775-271402117212415/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:39 np0005534516 python3.9[212996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:05:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:39 np0005534516 python3.9[213119]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057938.801763-775-96272213248236/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:40 np0005534516 python3.9[213269]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:05:41.027 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:05:41.028 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:05:41.028 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:05:41 np0005534516 python3.9[213424]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 03:05:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:42 np0005534516 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 03:05:43 np0005534516 python3.9[213580]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:43 np0005534516 python3.9[213732]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:44 np0005534516 podman[213856]: 2025-11-25 08:05:44.380179699 +0000 UTC m=+0.158110194 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 03:05:44 np0005534516 python3.9[213897]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:45 np0005534516 python3.9[214055]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:46 np0005534516 python3.9[214207]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:46 np0005534516 python3.9[214359]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:47 np0005534516 python3.9[214511]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:48 np0005534516 python3.9[214663]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:49 np0005534516 python3.9[214815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:49 np0005534516 python3.9[214967]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:50 np0005534516 python3.9[215119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:05:50 np0005534516 systemd[1]: Reloading.
Nov 25 03:05:50 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:05:50 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:05:51 np0005534516 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 03:05:51 np0005534516 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 03:05:51 np0005534516 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 03:05:51 np0005534516 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 03:05:51 np0005534516 systemd[1]: Starting libvirt logging daemon...
Nov 25 03:05:51 np0005534516 systemd[1]: Started libvirt logging daemon.
Nov 25 03:05:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:52 np0005534516 python3.9[215312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:05:52 np0005534516 systemd[1]: Reloading.
Nov 25 03:05:52 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:05:52 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:05:52 np0005534516 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 03:05:52 np0005534516 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 03:05:52 np0005534516 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 03:05:52 np0005534516 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 03:05:52 np0005534516 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 03:05:52 np0005534516 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 03:05:52 np0005534516 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 03:05:52 np0005534516 systemd[1]: Started libvirt nodedev daemon.
Nov 25 03:05:53 np0005534516 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:05:53
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms']
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:05:53 np0005534516 python3.9[215527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:05:53 np0005534516 systemd[1]: Reloading.
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:05:53 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:05:53 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:05:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:05:53 np0005534516 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 03:05:53 np0005534516 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 03:05:53 np0005534516 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 03:05:53 np0005534516 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 03:05:53 np0005534516 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 03:05:53 np0005534516 systemd[1]: Starting libvirt proxy daemon...
Nov 25 03:05:53 np0005534516 systemd[1]: Started libvirt proxy daemon.
Nov 25 03:05:53 np0005534516 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 03:05:53 np0005534516 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 03:05:54 np0005534516 python3.9[215749]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:05:54 np0005534516 systemd[1]: Reloading.
Nov 25 03:05:54 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:05:54 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:05:54 np0005534516 setroubleshoot[215528]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 205a231d-7ff4-429e-a24c-785da799a4e8
Nov 25 03:05:54 np0005534516 setroubleshoot[215528]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 03:05:54 np0005534516 setroubleshoot[215528]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 205a231d-7ff4-429e-a24c-785da799a4e8
Nov 25 03:05:54 np0005534516 setroubleshoot[215528]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 03:05:55 np0005534516 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 03:05:55 np0005534516 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 03:05:55 np0005534516 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 03:05:55 np0005534516 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 03:05:55 np0005534516 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 03:05:55 np0005534516 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 03:05:55 np0005534516 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 03:05:55 np0005534516 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 03:05:55 np0005534516 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 03:05:55 np0005534516 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 03:05:55 np0005534516 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 03:05:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:05:55 np0005534516 systemd[1]: Started libvirt QEMU daemon.
Nov 25 03:05:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:55 np0005534516 python3.9[215965]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:05:55 np0005534516 systemd[1]: Reloading.
Nov 25 03:05:55 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:05:55 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:05:56 np0005534516 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 03:05:56 np0005534516 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 03:05:56 np0005534516 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 03:05:56 np0005534516 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 03:05:56 np0005534516 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 03:05:56 np0005534516 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 03:05:56 np0005534516 systemd[1]: Starting libvirt secret daemon...
Nov 25 03:05:56 np0005534516 systemd[1]: Started libvirt secret daemon.
Nov 25 03:05:56 np0005534516 python3.9[216176]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:05:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:05:57 np0005534516 python3.9[216328]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 03:05:58 np0005534516 python3.9[216480]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:05:59 np0005534516 python3.9[216634]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 03:05:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:00 np0005534516 python3.9[216784]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:00 np0005534516 python3.9[216905]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057959.5786674-1133-195959289094257/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9d13c3a77669e55dac3ba13d81cc947a961520ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:01 np0005534516 python3.9[217057]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a058ea16-8b73-51e1-b172-ed66107102bf#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:06:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:01 np0005534516 python3.9[217219]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:06:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:04 np0005534516 python3.9[217682]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:04 np0005534516 python3.9[217834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:04 np0005534516 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 03:06:04 np0005534516 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.001s CPU time.
Nov 25 03:06:04 np0005534516 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 03:06:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:06 np0005534516 python3.9[217957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057964.3363228-1188-79151255616324/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:06 np0005534516 python3.9[218109]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:07 np0005534516 python3.9[218261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:08 np0005534516 python3.9[218339]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:08 np0005534516 podman[218463]: 2025-11-25 08:06:08.743095594 +0000 UTC m=+0.112159517 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:06:08 np0005534516 python3.9[218508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:09 np0005534516 python3.9[218595]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xyvl1bi1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:10 np0005534516 python3.9[218747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:14 np0005534516 ceph-mds[101326]: mds.beacon.cephfs.compute-0.dgfvvi missed beacon ack from the monitors
Nov 25 03:06:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:18 np0005534516 ceph-mds[101326]: mds.beacon.cephfs.compute-0.dgfvvi missed beacon ack from the monitors
Nov 25 03:06:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:22 np0005534516 ceph-mds[101326]: mds.beacon.cephfs.compute-0.dgfvvi missed beacon ack from the monitors
Nov 25 03:06:22 np0005534516 podman[218826]: 2025-11-25 08:06:22.321207009 +0000 UTC m=+7.569526627 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 03:06:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.3197 seconds
Nov 25 03:06:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:22 np0005534516 python3.9[218825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:23 np0005534516 python3.9[218996]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:23 np0005534516 python3[219149]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 03:06:24 np0005534516 python3.9[219301]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:25 np0005534516 python3.9[219379]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:25 np0005534516 python3.9[219531]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:26 np0005534516 python3.9[219609]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:27 np0005534516 python3.9[219761]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.446373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987446409, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1338, "num_deletes": 252, "total_data_size": 2097677, "memory_usage": 2145336, "flush_reason": "Manual Compaction"}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987467594, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1213743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11747, "largest_seqno": 13084, "table_properties": {"data_size": 1209022, "index_size": 2117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12035, "raw_average_key_size": 20, "raw_value_size": 1198720, "raw_average_value_size": 2004, "num_data_blocks": 97, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057831, "oldest_key_time": 1764057831, "file_creation_time": 1764057987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 21275 microseconds, and 4173 cpu microseconds.
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:06:27 np0005534516 python3.9[219839]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.467646) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1213743 bytes OK
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.467666) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.515337) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.515392) EVENT_LOG_v1 {"time_micros": 1764057987515379, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.515422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2091707, prev total WAL file size 2091707, number of live WAL files 2.
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.516392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1185KB)], [29(8052KB)]
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987516451, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9459683, "oldest_snapshot_seqno": -1}
Nov 25 03:06:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3887 keys, 6988387 bytes, temperature: kUnknown
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987641004, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 6988387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6961448, "index_size": 16142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 93447, "raw_average_key_size": 24, "raw_value_size": 6890200, "raw_average_value_size": 1772, "num_data_blocks": 706, "num_entries": 3887, "num_filter_entries": 3887, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764057987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.641283) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 6988387 bytes
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.666682) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.9 rd, 56.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(13.6) write-amplify(5.8) OK, records in: 4338, records dropped: 451 output_compression: NoCompression
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.666718) EVENT_LOG_v1 {"time_micros": 1764057987666705, "job": 12, "event": "compaction_finished", "compaction_time_micros": 124635, "compaction_time_cpu_micros": 17663, "output_level": 6, "num_output_files": 1, "total_output_size": 6988387, "num_input_records": 4338, "num_output_records": 3887, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987667031, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764057987668179, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.516267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.668209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.668212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.668214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.668215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:06:27.668217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:06:28 np0005534516 python3.9[219991]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:28 np0005534516 python3.9[220069]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:29 np0005534516 python3.9[220221]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:29 np0005534516 python3.9[220346]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764057988.7555165-1313-229390815444521/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:30 np0005534516 python3.9[220498]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:31 np0005534516 python3.9[220650]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:06:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:32 np0005534516 python3.9[220805]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:32 np0005534516 python3.9[220957]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:06:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:33 np0005534516 python3.9[221110]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:06:34 np0005534516 python3.9[221339]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ac7e7d07-b8b5-418b-bfac-69bd911820f8 does not exist
Nov 25 03:06:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f93500c-bd40-43f9-817d-047c40459aaf does not exist
Nov 25 03:06:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f386e3aa-7645-486e-b1d6-1dc63bb33ba3 does not exist
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:06:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:35 np0005534516 python3.9[221582]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.327545057 +0000 UTC m=+0.087745005 container create fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.259878498 +0000 UTC m=+0.020078496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:35 np0005534516 systemd[1]: Started libpod-conmon-fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd.scope.
Nov 25 03:06:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.490749735 +0000 UTC m=+0.250949723 container init fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.499332692 +0000 UTC m=+0.259532660 container start fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:06:35 np0005534516 hopeful_wescoff[221831]: 167 167
Nov 25 03:06:35 np0005534516 systemd[1]: libpod-fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd.scope: Deactivated successfully.
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.517915626 +0000 UTC m=+0.278115614 container attach fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.518845381 +0000 UTC m=+0.279045349 container died fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:06:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4fb13921fbb6ad5f90d15494d4ba6e28b77138c6b9a6058bcad4df5ed58639e4-merged.mount: Deactivated successfully.
Nov 25 03:06:35 np0005534516 python3.9[221866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:06:35 np0005534516 podman[221747]: 2025-11-25 08:06:35.802291291 +0000 UTC m=+0.562491289 container remove fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:35 np0005534516 systemd[1]: libpod-conmon-fe4ce4cd953f0467d1369501de65f0da17aacbc98486111882b0a620335374cd.scope: Deactivated successfully.
Nov 25 03:06:36 np0005534516 podman[221940]: 2025-11-25 08:06:36.055064674 +0000 UTC m=+0.107787459 container create 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:06:36 np0005534516 podman[221940]: 2025-11-25 08:06:35.97605437 +0000 UTC m=+0.028777175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:36 np0005534516 systemd[1]: Started libpod-conmon-7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78.scope.
Nov 25 03:06:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:36 np0005534516 podman[221940]: 2025-11-25 08:06:36.195628346 +0000 UTC m=+0.248351141 container init 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:06:36 np0005534516 podman[221940]: 2025-11-25 08:06:36.205138318 +0000 UTC m=+0.257861093 container start 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:06:36 np0005534516 podman[221940]: 2025-11-25 08:06:36.250424979 +0000 UTC m=+0.303147784 container attach 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 03:06:36 np0005534516 python3.9[222029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057995.199449-1385-66140460653032/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:37 np0005534516 python3.9[222183]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:37 np0005534516 busy_merkle[222025]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:06:37 np0005534516 busy_merkle[222025]: --> relative data size: 1.0
Nov 25 03:06:37 np0005534516 busy_merkle[222025]: --> All data devices are unavailable
Nov 25 03:06:37 np0005534516 systemd[1]: libpod-7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78.scope: Deactivated successfully.
Nov 25 03:06:37 np0005534516 podman[221940]: 2025-11-25 08:06:37.335216674 +0000 UTC m=+1.387939469 container died 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:37 np0005534516 systemd[1]: libpod-7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78.scope: Consumed 1.075s CPU time.
Nov 25 03:06:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:37 np0005534516 python3.9[222338]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057996.5585134-1400-157062518335649/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4ca4b5732011d348f5f90f7fdd1cb74362a40b758c5530cf468cc95a6334929b-merged.mount: Deactivated successfully.
Nov 25 03:06:37 np0005534516 podman[221940]: 2025-11-25 08:06:37.812525539 +0000 UTC m=+1.865248324 container remove 7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 03:06:37 np0005534516 systemd[1]: libpod-conmon-7cd7a955eb99f4497fde6b79fd458b3076ff7058199fcb8ae00ddbe92dcb1b78.scope: Deactivated successfully.
Nov 25 03:06:38 np0005534516 python3.9[222565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.432178105 +0000 UTC m=+0.068944446 container create 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.382287347 +0000 UTC m=+0.019053718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:38 np0005534516 systemd[1]: Started libpod-conmon-4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f.scope.
Nov 25 03:06:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.531929831 +0000 UTC m=+0.168696202 container init 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.538181243 +0000 UTC m=+0.174947584 container start 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:06:38 np0005534516 silly_brahmagupta[222742]: 167 167
Nov 25 03:06:38 np0005534516 systemd[1]: libpod-4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f.scope: Deactivated successfully.
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.546724939 +0000 UTC m=+0.183491300 container attach 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.547084099 +0000 UTC m=+0.183850450 container died 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:06:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-13e7ce08382639d8865b4d4d830534a9990802c7c1a1f2ae230bee871a5f75d3-merged.mount: Deactivated successfully.
Nov 25 03:06:38 np0005534516 podman[222682]: 2025-11-25 08:06:38.63615019 +0000 UTC m=+0.272916531 container remove 4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 03:06:38 np0005534516 systemd[1]: libpod-conmon-4ab7098ebd1ab5ddf78bfb150990df59edf091daf856418b4bb11c38740aae7f.scope: Deactivated successfully.
Nov 25 03:06:38 np0005534516 python3.9[222777]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764057997.7273626-1415-231025240995256/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:06:38 np0005534516 podman[222797]: 2025-11-25 08:06:38.814165446 +0000 UTC m=+0.058905648 container create 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:06:38 np0005534516 systemd[1]: Started libpod-conmon-30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74.scope.
Nov 25 03:06:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:38 np0005534516 podman[222797]: 2025-11-25 08:06:38.782978575 +0000 UTC m=+0.027718797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a290a940f60377d83b5597d5b264079d69ccc04f02ce3da9160a717baf56b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a290a940f60377d83b5597d5b264079d69ccc04f02ce3da9160a717baf56b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a290a940f60377d83b5597d5b264079d69ccc04f02ce3da9160a717baf56b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a290a940f60377d83b5597d5b264079d69ccc04f02ce3da9160a717baf56b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:38 np0005534516 podman[222797]: 2025-11-25 08:06:38.900294956 +0000 UTC m=+0.145035168 container init 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:38 np0005534516 podman[222797]: 2025-11-25 08:06:38.908504723 +0000 UTC m=+0.153244945 container start 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:06:38 np0005534516 podman[222797]: 2025-11-25 08:06:38.92907417 +0000 UTC m=+0.173814422 container attach 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:06:38 np0005534516 podman[222835]: 2025-11-25 08:06:38.955754198 +0000 UTC m=+0.088121345 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 03:06:39 np0005534516 python3.9[222994]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:06:39 np0005534516 systemd[1]: Reloading.
Nov 25 03:06:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:39 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:06:39 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:06:39 np0005534516 trusting_galois[222838]: {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    "0": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "devices": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "/dev/loop3"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            ],
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_name": "ceph_lv0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_size": "21470642176",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "name": "ceph_lv0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "tags": {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_name": "ceph",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.crush_device_class": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.encrypted": "0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_id": "0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.vdo": "0"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            },
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "vg_name": "ceph_vg0"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        }
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    ],
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    "1": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "devices": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "/dev/loop4"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            ],
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_name": "ceph_lv1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_size": "21470642176",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "name": "ceph_lv1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "tags": {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_name": "ceph",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.crush_device_class": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.encrypted": "0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_id": "1",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.vdo": "0"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            },
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "vg_name": "ceph_vg1"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        }
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    ],
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    "2": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "devices": [
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "/dev/loop5"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            ],
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_name": "ceph_lv2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_size": "21470642176",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "name": "ceph_lv2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "tags": {
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.cluster_name": "ceph",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.crush_device_class": "",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.encrypted": "0",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osd_id": "2",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:                "ceph.vdo": "0"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            },
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "type": "block",
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:            "vg_name": "ceph_vg2"
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:        }
Nov 25 03:06:39 np0005534516 trusting_galois[222838]:    ]
Nov 25 03:06:39 np0005534516 trusting_galois[222838]: }
Nov 25 03:06:39 np0005534516 podman[222797]: 2025-11-25 08:06:39.778775622 +0000 UTC m=+1.023515824 container died 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 03:06:39 np0005534516 systemd[1]: libpod-30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74.scope: Deactivated successfully.
Nov 25 03:06:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-86a290a940f60377d83b5597d5b264079d69ccc04f02ce3da9160a717baf56b5-merged.mount: Deactivated successfully.
Nov 25 03:06:39 np0005534516 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 03:06:40 np0005534516 podman[222797]: 2025-11-25 08:06:40.080267709 +0000 UTC m=+1.325007911 container remove 30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:06:40 np0005534516 systemd[1]: libpod-conmon-30edba5c924d9ae3a5e817517f100ff5bdbaf7bc990e35ea1dd2d719f63d3e74.scope: Deactivated successfully.
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.693607831 +0000 UTC m=+0.081976625 container create a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.644872916 +0000 UTC m=+0.033241770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:40 np0005534516 systemd[1]: Started libpod-conmon-a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16.scope.
Nov 25 03:06:40 np0005534516 python3.9[223305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 03:06:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:40 np0005534516 systemd[1]: Reloading.
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.780631345 +0000 UTC m=+0.169000159 container init a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.787797734 +0000 UTC m=+0.176166528 container start a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.791551777 +0000 UTC m=+0.179920601 container attach a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:06:40 np0005534516 goofy_swanson[223363]: 167 167
Nov 25 03:06:40 np0005534516 podman[223347]: 2025-11-25 08:06:40.793339287 +0000 UTC m=+0.181708081 container died a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:06:40 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:06:40 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:06:41.029 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:06:41.030 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:06:41.030 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:06:41 np0005534516 systemd[1]: libpod-a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16.scope: Deactivated successfully.
Nov 25 03:06:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3cf19f7f513efb74ffaa480cb06bc28e43d23fc5176688db7a5c3b0a3462ff26-merged.mount: Deactivated successfully.
Nov 25 03:06:41 np0005534516 podman[223347]: 2025-11-25 08:06:41.152163608 +0000 UTC m=+0.540532402 container remove a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:06:41 np0005534516 systemd[1]: libpod-conmon-a93477e4a85168929e7fdcffc77067b9be80c9cadd19b08d643a3dcdfc13bf16.scope: Deactivated successfully.
Nov 25 03:06:41 np0005534516 systemd[1]: Reloading.
Nov 25 03:06:41 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:06:41 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:06:41 np0005534516 podman[223450]: 2025-11-25 08:06:41.311349135 +0000 UTC m=+0.039148702 container create 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:41 np0005534516 podman[223450]: 2025-11-25 08:06:41.295099606 +0000 UTC m=+0.022899203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:06:41 np0005534516 systemd[1]: Started libpod-conmon-99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d.scope.
Nov 25 03:06:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:06:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb32685c5356d50797d698d6920acb86d62b07f251db2cda8069470a672bb9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb32685c5356d50797d698d6920acb86d62b07f251db2cda8069470a672bb9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb32685c5356d50797d698d6920acb86d62b07f251db2cda8069470a672bb9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb32685c5356d50797d698d6920acb86d62b07f251db2cda8069470a672bb9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:06:41 np0005534516 podman[223450]: 2025-11-25 08:06:41.580661024 +0000 UTC m=+0.308460621 container init 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:06:41 np0005534516 podman[223450]: 2025-11-25 08:06:41.590046363 +0000 UTC m=+0.317845930 container start 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:41 np0005534516 podman[223450]: 2025-11-25 08:06:41.597248163 +0000 UTC m=+0.325047730 container attach 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:06:41 np0005534516 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 03:06:41 np0005534516 systemd[1]: session-49.scope: Consumed 3min 33.871s CPU time.
Nov 25 03:06:41 np0005534516 systemd-logind[822]: Session 49 logged out. Waiting for processes to exit.
Nov 25 03:06:41 np0005534516 systemd-logind[822]: Removed session 49.
Nov 25 03:06:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]: {
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_id": 1,
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "type": "bluestore"
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    },
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_id": 2,
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "type": "bluestore"
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    },
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_id": 0,
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:        "type": "bluestore"
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]:    }
Nov 25 03:06:42 np0005534516 romantic_goldberg[223478]: }
Nov 25 03:06:42 np0005534516 systemd[1]: libpod-99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d.scope: Deactivated successfully.
Nov 25 03:06:42 np0005534516 podman[223450]: 2025-11-25 08:06:42.628617662 +0000 UTC m=+1.356417239 container died 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:06:42 np0005534516 systemd[1]: libpod-99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d.scope: Consumed 1.031s CPU time.
Nov 25 03:06:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4cb32685c5356d50797d698d6920acb86d62b07f251db2cda8069470a672bb9b-merged.mount: Deactivated successfully.
Nov 25 03:06:42 np0005534516 podman[223450]: 2025-11-25 08:06:42.692901147 +0000 UTC m=+1.420700714 container remove 99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:06:42 np0005534516 systemd[1]: libpod-conmon-99c8abb3132f00f6d67c0031a648c87be4a63722fbcf5140255bd1a2efcff42d.scope: Deactivated successfully.
Nov 25 03:06:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:06:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:06:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fab4bd92-6069-4d12-8ec4-e91431fbdc63 does not exist
Nov 25 03:06:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6f59165c-3e5d-4446-b4d0-24bfd67c6ff7 does not exist
Nov 25 03:06:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:06:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:47 np0005534516 systemd-logind[822]: New session 50 of user zuul.
Nov 25 03:06:47 np0005534516 systemd[1]: Started Session 50 of User zuul.
Nov 25 03:06:48 np0005534516 python3.9[223753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 03:06:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:49 np0005534516 python3.9[223907]: ansible-ansible.builtin.service_facts Invoked
Nov 25 03:06:50 np0005534516 network[223924]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 03:06:50 np0005534516 network[223925]: 'network-scripts' will be removed from distribution in near future.
Nov 25 03:06:50 np0005534516 network[223926]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 03:06:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:52 np0005534516 podman[224010]: 2025-11-25 08:06:52.442180117 +0000 UTC m=+0.063754282 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:06:53
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'volumes']
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:06:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:06:53 np0005534516 python3.9[224216]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 03:06:54 np0005534516 python3.9[224300]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 03:06:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:06:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:06:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:01 np0005534516 python3.9[224453]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:03 np0005534516 python3.9[224605]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:07:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:04 np0005534516 python3.9[224758]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:04 np0005534516 python3.9[224910]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:07:05 np0005534516 python3.9[225063]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:06 np0005534516 python3.9[225186]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764058024.848927-95-193186534480611/.source.iscsi _original_basename=.hzgc5kg9 follow=False checksum=11d7418586990663851ab7048415bba15d0f1430 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:07 np0005534516 python3.9[225338]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:07 np0005534516 python3.9[225490]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:07 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:07:07 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:07:09 np0005534516 python3.9[225643]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:07:09 np0005534516 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 03:07:09 np0005534516 podman[225645]: 2025-11-25 08:07:09.392884379 +0000 UTC m=+0.095421046 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:07:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:10 np0005534516 python3.9[225824]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:07:10 np0005534516 systemd[1]: Reloading.
Nov 25 03:07:10 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:07:10 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:07:10 np0005534516 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 03:07:10 np0005534516 systemd[1]: Starting Open-iSCSI...
Nov 25 03:07:10 np0005534516 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 03:07:10 np0005534516 systemd[1]: Started Open-iSCSI.
Nov 25 03:07:10 np0005534516 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 03:07:10 np0005534516 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 03:07:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:11 np0005534516 python3.9[226025]: ansible-ansible.builtin.service_facts Invoked
Nov 25 03:07:11 np0005534516 network[226042]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 03:07:11 np0005534516 network[226043]: 'network-scripts' will be removed from distribution in near future.
Nov 25 03:07:11 np0005534516 network[226044]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 03:07:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:15 np0005534516 python3.9[226316]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 03:07:16 np0005534516 python3.9[226468]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 03:07:17 np0005534516 python3.9[226624]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:18 np0005534516 python3.9[226747]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764058036.7938623-172-86206203281468/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:19 np0005534516 python3.9[226899]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:20 np0005534516 python3.9[227051]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:07:20 np0005534516 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 03:07:20 np0005534516 systemd[1]: Stopped Load Kernel Modules.
Nov 25 03:07:20 np0005534516 systemd[1]: Stopping Load Kernel Modules...
Nov 25 03:07:20 np0005534516 systemd[1]: Starting Load Kernel Modules...
Nov 25 03:07:20 np0005534516 systemd[1]: Finished Load Kernel Modules.
Nov 25 03:07:21 np0005534516 python3.9[227207]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:22 np0005534516 python3.9[227359]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:22 np0005534516 podman[227483]: 2025-11-25 08:07:22.736803922 +0000 UTC m=+0.086535771 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 03:07:22 np0005534516 python3.9[227524]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:23 np0005534516 python3.9[227683]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:24 np0005534516 python3.9[227806]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764058043.0759454-230-95852454100562/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:24 np0005534516 python3.9[227958]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:07:25 np0005534516 python3.9[228111]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:26 np0005534516 python3.9[228263]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:27 np0005534516 python3.9[228415]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:28 np0005534516 python3.9[228567]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:28 np0005534516 python3.9[228719]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:29 np0005534516 python3.9[228871]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:30 np0005534516 python3.9[229023]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:31 np0005534516 python3.9[229175]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:31 np0005534516 python3.9[229329]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:32 np0005534516 python3.9[229481]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:33 np0005534516 python3.9[229633]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:33 np0005534516 python3.9[229711]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:34 np0005534516 python3.9[229863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:34 np0005534516 python3.9[229941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:35 np0005534516 python3.9[230093]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:36 np0005534516 python3.9[230245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:36 np0005534516 python3.9[230323]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:37 np0005534516 python3.9[230475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:37 np0005534516 python3.9[230553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:38 np0005534516 python3.9[230705]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:07:38 np0005534516 systemd[1]: Reloading.
Nov 25 03:07:38 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:07:38 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:07:39 np0005534516 podman[230894]: 2025-11-25 08:07:39.628795913 +0000 UTC m=+0.119953325 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:07:39 np0005534516 python3.9[230895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:40 np0005534516 python3.9[230999]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:40 np0005534516 python3.9[231151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:07:41.030 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:07:41.031 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:07:41.031 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:07:41 np0005534516 python3.9[231229]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:42 np0005534516 python3.9[231381]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:07:42 np0005534516 systemd[1]: Reloading.
Nov 25 03:07:42 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:07:42 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:07:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:42 np0005534516 systemd[1]: Starting Create netns directory...
Nov 25 03:07:42 np0005534516 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 03:07:42 np0005534516 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 03:07:42 np0005534516 systemd[1]: Finished Create netns directory.
Nov 25 03:07:43 np0005534516 python3.9[231675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0181d8c4-b543-473e-9025-c8c9045ed683 does not exist
Nov 25 03:07:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d85b3606-ca10-4567-ae1c-b9a69603f347 does not exist
Nov 25 03:07:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 46fc56fc-f489-47d7-8e55-3ffc3a0ccdb2 does not exist
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:07:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:07:44 np0005534516 python3.9[231929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.266691854 +0000 UTC m=+0.032751986 container create 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:07:44 np0005534516 systemd[1]: Started libpod-conmon-892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11.scope.
Nov 25 03:07:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.344237885 +0000 UTC m=+0.110298017 container init 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.251942606 +0000 UTC m=+0.018002758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.350380396 +0000 UTC m=+0.116440528 container start 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.353393549 +0000 UTC m=+0.119453681 container attach 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:44 np0005534516 goofy_yonath[232089]: 167 167
Nov 25 03:07:44 np0005534516 systemd[1]: libpod-892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11.scope: Deactivated successfully.
Nov 25 03:07:44 np0005534516 conmon[232089]: conmon 892ad6e161d06c463b5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11.scope/container/memory.events
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.358687955 +0000 UTC m=+0.124748087 container died 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:07:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e9a6a6726d1987fd7bbada1af88fcdda3a192554554ccf6ef19a519d9d1a23d5-merged.mount: Deactivated successfully.
Nov 25 03:07:44 np0005534516 podman[232047]: 2025-11-25 08:07:44.390450882 +0000 UTC m=+0.156511014 container remove 892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:07:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:44 np0005534516 systemd[1]: libpod-conmon-892ad6e161d06c463b5d7facf998f1c23aa1f92688cc52b101b193c1cfd60a11.scope: Deactivated successfully.
Nov 25 03:07:44 np0005534516 podman[232164]: 2025-11-25 08:07:44.560114308 +0000 UTC m=+0.039754589 container create 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 03:07:44 np0005534516 systemd[1]: Started libpod-conmon-600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e.scope.
Nov 25 03:07:44 np0005534516 python3.9[232158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058063.6467676-437-256411046839333/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:07:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:07:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:44 np0005534516 podman[232164]: 2025-11-25 08:07:44.542457831 +0000 UTC m=+0.022098142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:44 np0005534516 podman[232164]: 2025-11-25 08:07:44.648948392 +0000 UTC m=+0.128588703 container init 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 03:07:44 np0005534516 podman[232164]: 2025-11-25 08:07:44.657017055 +0000 UTC m=+0.136657336 container start 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:07:44 np0005534516 podman[232164]: 2025-11-25 08:07:44.661560151 +0000 UTC m=+0.141200442 container attach 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:07:45 np0005534516 python3.9[232336]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:07:45 np0005534516 mystifying_nightingale[232180]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:07:45 np0005534516 mystifying_nightingale[232180]: --> relative data size: 1.0
Nov 25 03:07:45 np0005534516 mystifying_nightingale[232180]: --> All data devices are unavailable
Nov 25 03:07:45 np0005534516 systemd[1]: libpod-600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e.scope: Deactivated successfully.
Nov 25 03:07:45 np0005534516 systemd[1]: libpod-600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e.scope: Consumed 1.034s CPU time.
Nov 25 03:07:45 np0005534516 podman[232164]: 2025-11-25 08:07:45.770728389 +0000 UTC m=+1.250368700 container died 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:07:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-286f8396230706360f4d3ed33caa1542e53c80d28100b1b830958e2ddfeb935a-merged.mount: Deactivated successfully.
Nov 25 03:07:45 np0005534516 podman[232164]: 2025-11-25 08:07:45.990409487 +0000 UTC m=+1.470049768 container remove 600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nightingale, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:07:45 np0005534516 systemd[1]: libpod-conmon-600c74e3c7805aae286c4e70fa8e296595eab9f39c97ab7275980a32c9693f7e.scope: Deactivated successfully.
Nov 25 03:07:46 np0005534516 python3.9[232526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:07:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.588065885 +0000 UTC m=+0.041771105 container create 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:07:46 np0005534516 systemd[1]: Started libpod-conmon-60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd.scope.
Nov 25 03:07:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.570964663 +0000 UTC m=+0.024669883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.670936055 +0000 UTC m=+0.124641265 container init 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.679337727 +0000 UTC m=+0.133042937 container start 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.682593207 +0000 UTC m=+0.136298407 container attach 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:07:46 np0005534516 trusting_matsumoto[232805]: 167 167
Nov 25 03:07:46 np0005534516 systemd[1]: libpod-60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd.scope: Deactivated successfully.
Nov 25 03:07:46 np0005534516 conmon[232805]: conmon 60b31017b7be0a822c18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd.scope/container/memory.events
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.685998331 +0000 UTC m=+0.139703541 container died 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9a8fb19c6d1aff9846e15f4a724a8ab094717b23d12496a5813eeb1ede9f6c24-merged.mount: Deactivated successfully.
Nov 25 03:07:46 np0005534516 python3.9[232788]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764058065.7049193-462-2356908705644/.source.json _original_basename=.oad3h3gc follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:46 np0005534516 podman[232789]: 2025-11-25 08:07:46.721347797 +0000 UTC m=+0.175053007 container remove 60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:07:46 np0005534516 systemd[1]: libpod-conmon-60b31017b7be0a822c18feeb387430146f0ee8dd9b8fd8c6812542d43b40accd.scope: Deactivated successfully.
Nov 25 03:07:46 np0005534516 podman[232853]: 2025-11-25 08:07:46.91765543 +0000 UTC m=+0.044449429 container create 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:07:46 np0005534516 systemd[1]: Started libpod-conmon-80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6.scope.
Nov 25 03:07:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3127c6745653f94d2518308e79f7875732d106407f0f757b1cf4a64ff2decd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3127c6745653f94d2518308e79f7875732d106407f0f757b1cf4a64ff2decd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3127c6745653f94d2518308e79f7875732d106407f0f757b1cf4a64ff2decd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3127c6745653f94d2518308e79f7875732d106407f0f757b1cf4a64ff2decd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:46 np0005534516 podman[232853]: 2025-11-25 08:07:46.898990604 +0000 UTC m=+0.025784653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:46 np0005534516 podman[232853]: 2025-11-25 08:07:46.999347876 +0000 UTC m=+0.126141895 container init 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:07:47 np0005534516 podman[232853]: 2025-11-25 08:07:47.008685865 +0000 UTC m=+0.135479864 container start 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:07:47 np0005534516 podman[232853]: 2025-11-25 08:07:47.011812001 +0000 UTC m=+0.138606170 container attach 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:07:47 np0005534516 python3.9[233001]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]: {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    "0": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "devices": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "/dev/loop3"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            ],
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_name": "ceph_lv0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_size": "21470642176",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "name": "ceph_lv0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "tags": {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_name": "ceph",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.crush_device_class": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.encrypted": "0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_id": "0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.vdo": "0"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            },
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "vg_name": "ceph_vg0"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        }
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    ],
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    "1": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "devices": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "/dev/loop4"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            ],
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_name": "ceph_lv1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_size": "21470642176",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "name": "ceph_lv1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "tags": {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_name": "ceph",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.crush_device_class": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.encrypted": "0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_id": "1",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.vdo": "0"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            },
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "vg_name": "ceph_vg1"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        }
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    ],
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    "2": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "devices": [
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "/dev/loop5"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            ],
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_name": "ceph_lv2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_size": "21470642176",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "name": "ceph_lv2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "tags": {
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.cluster_name": "ceph",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.crush_device_class": "",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.encrypted": "0",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osd_id": "2",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:                "ceph.vdo": "0"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            },
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "type": "block",
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:            "vg_name": "ceph_vg2"
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:        }
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]:    ]
Nov 25 03:07:47 np0005534516 beautiful_satoshi[232912]: }
Nov 25 03:07:47 np0005534516 systemd[1]: libpod-80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6.scope: Deactivated successfully.
Nov 25 03:07:47 np0005534516 podman[232853]: 2025-11-25 08:07:47.818517394 +0000 UTC m=+0.945311413 container died 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:07:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b3127c6745653f94d2518308e79f7875732d106407f0f757b1cf4a64ff2decd3-merged.mount: Deactivated successfully.
Nov 25 03:07:47 np0005534516 podman[232853]: 2025-11-25 08:07:47.89148034 +0000 UTC m=+1.018274349 container remove 80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:07:47 np0005534516 systemd[1]: libpod-conmon-80dab811ab290f1b395db688114bfad7e22084e6e3f196a577b47c5bb9b459e6.scope: Deactivated successfully.
Nov 25 03:07:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.462498653 +0000 UTC m=+0.042087905 container create 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:07:48 np0005534516 systemd[1]: Started libpod-conmon-1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e.scope.
Nov 25 03:07:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.447056836 +0000 UTC m=+0.026646078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.549583738 +0000 UTC m=+0.129172990 container init 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.559117221 +0000 UTC m=+0.138706473 container start 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.562271868 +0000 UTC m=+0.141861130 container attach 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:07:48 np0005534516 serene_varahamihira[233448]: 167 167
Nov 25 03:07:48 np0005534516 systemd[1]: libpod-1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e.scope: Deactivated successfully.
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.564901431 +0000 UTC m=+0.144490673 container died 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:07:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f3ed2d3aac342d222f2d386a09a9b481247c45b41c1dc8e902940a450ee012d3-merged.mount: Deactivated successfully.
Nov 25 03:07:48 np0005534516 podman[233432]: 2025-11-25 08:07:48.600268348 +0000 UTC m=+0.179857610 container remove 1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:48 np0005534516 systemd[1]: libpod-conmon-1da343d941b3fa5e1d9266788333494fd0f384f3c187b85a2a7154f84490444e.scope: Deactivated successfully.
Nov 25 03:07:48 np0005534516 podman[233497]: 2025-11-25 08:07:48.765490882 +0000 UTC m=+0.037162688 container create 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:07:48 np0005534516 systemd[1]: Started libpod-conmon-7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a.scope.
Nov 25 03:07:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:07:48 np0005534516 podman[233497]: 2025-11-25 08:07:48.747866045 +0000 UTC m=+0.019537881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:07:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fd9df2775e739517de219919256d463fe5eca75b522543a38343322bf5188d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fd9df2775e739517de219919256d463fe5eca75b522543a38343322bf5188d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fd9df2775e739517de219919256d463fe5eca75b522543a38343322bf5188d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fd9df2775e739517de219919256d463fe5eca75b522543a38343322bf5188d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:07:48 np0005534516 podman[233497]: 2025-11-25 08:07:48.85809401 +0000 UTC m=+0.129765826 container init 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:07:48 np0005534516 podman[233497]: 2025-11-25 08:07:48.866133552 +0000 UTC m=+0.137805378 container start 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:07:48 np0005534516 podman[233497]: 2025-11-25 08:07:48.869889306 +0000 UTC m=+0.141561142 container attach 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:07:49 np0005534516 python3.9[233645]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]: {
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_id": 1,
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "type": "bluestore"
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    },
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_id": 2,
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "type": "bluestore"
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    },
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_id": 0,
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:        "type": "bluestore"
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]:    }
Nov 25 03:07:49 np0005534516 charming_aryabhata[233513]: }
Nov 25 03:07:49 np0005534516 systemd[1]: libpod-7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a.scope: Deactivated successfully.
Nov 25 03:07:49 np0005534516 podman[233497]: 2025-11-25 08:07:49.880010018 +0000 UTC m=+1.151681824 container died 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:07:49 np0005534516 systemd[1]: libpod-7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a.scope: Consumed 1.014s CPU time.
Nov 25 03:07:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-77fd9df2775e739517de219919256d463fe5eca75b522543a38343322bf5188d-merged.mount: Deactivated successfully.
Nov 25 03:07:50 np0005534516 podman[233497]: 2025-11-25 08:07:50.005690159 +0000 UTC m=+1.277361985 container remove 7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:07:50 np0005534516 systemd[1]: libpod-conmon-7e9d7982e1632eae5ba6f9a48c4049e3bf763646a4d5db281796684ff4b8624a.scope: Deactivated successfully.
Nov 25 03:07:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:07:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:07:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6423ee3a-904c-4326-b117-9a9f3493e40c does not exist
Nov 25 03:07:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eb7e1b8e-3b6f-4995-9e19-4bf48e2482cf does not exist
Nov 25 03:07:50 np0005534516 python3.9[233837]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 03:07:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:07:51 np0005534516 python3.9[234038]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 03:07:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:52 np0005534516 python3[234216]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:07:53
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', 'backups', '.rgw.root', 'default.rgw.meta']
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:07:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:07:53 np0005534516 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 03:07:54 np0005534516 podman[234264]: 2025-11-25 08:07:54.212878653 +0000 UTC m=+0.464848612 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 03:07:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:54 np0005534516 podman[234231]: 2025-11-25 08:07:54.687767621 +0000 UTC m=+1.876173966 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 03:07:54 np0005534516 podman[234307]: 2025-11-25 08:07:54.844152701 +0000 UTC m=+0.028944181 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 03:07:55 np0005534516 podman[234307]: 2025-11-25 08:07:55.030944309 +0000 UTC m=+0.215735749 container create 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd)
Nov 25 03:07:55 np0005534516 python3[234216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 03:07:55 np0005534516 python3.9[234497]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:56 np0005534516 python3.9[234651]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:57 np0005534516 python3.9[234727]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:07:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:07:58 np0005534516 python3.9[234878]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764058077.2840505-550-17020566572834/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:07:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:07:58 np0005534516 python3.9[234954]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:07:58 np0005534516 systemd[1]: Reloading.
Nov 25 03:07:58 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:07:58 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:07:59 np0005534516 python3.9[235066]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:07:59 np0005534516 systemd[1]: Reloading.
Nov 25 03:07:59 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:07:59 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:00 np0005534516 systemd[1]: Starting multipathd container...
Nov 25 03:08:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22425faa7f9484c203140830acb57dd4d069c188847fa30b8fa39b2fabfee9d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22425faa7f9484c203140830acb57dd4d069c188847fa30b8fa39b2fabfee9d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:00 np0005534516 systemd[1]: Started /usr/bin/podman healthcheck run 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938.
Nov 25 03:08:00 np0005534516 podman[235106]: 2025-11-25 08:08:00.37984628 +0000 UTC m=+0.299557075 container init 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 03:08:00 np0005534516 multipathd[235121]: + sudo -E kolla_set_configs
Nov 25 03:08:00 np0005534516 podman[235106]: 2025-11-25 08:08:00.402970449 +0000 UTC m=+0.322681194 container start 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:08:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:00 np0005534516 podman[235106]: multipathd
Nov 25 03:08:00 np0005534516 systemd[1]: Started multipathd container.
Nov 25 03:08:00 np0005534516 multipathd[235121]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:08:00 np0005534516 multipathd[235121]: INFO:__main__:Validating config file
Nov 25 03:08:00 np0005534516 multipathd[235121]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:08:00 np0005534516 multipathd[235121]: INFO:__main__:Writing out command to execute
Nov 25 03:08:00 np0005534516 multipathd[235121]: ++ cat /run_command
Nov 25 03:08:00 np0005534516 multipathd[235121]: + CMD='/usr/sbin/multipathd -d'
Nov 25 03:08:00 np0005534516 multipathd[235121]: + ARGS=
Nov 25 03:08:00 np0005534516 multipathd[235121]: + sudo kolla_copy_cacerts
Nov 25 03:08:00 np0005534516 podman[235127]: 2025-11-25 08:08:00.52213009 +0000 UTC m=+0.110210705 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:08:00 np0005534516 multipathd[235121]: + [[ ! -n '' ]]
Nov 25 03:08:00 np0005534516 multipathd[235121]: + . kolla_extend_start
Nov 25 03:08:00 np0005534516 multipathd[235121]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 03:08:00 np0005534516 multipathd[235121]: Running command: '/usr/sbin/multipathd -d'
Nov 25 03:08:00 np0005534516 multipathd[235121]: + umask 0022
Nov 25 03:08:00 np0005534516 multipathd[235121]: + exec /usr/sbin/multipathd -d
Nov 25 03:08:00 np0005534516 systemd[1]: 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-2e054a24189f55fe.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 03:08:00 np0005534516 systemd[1]: 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-2e054a24189f55fe.service: Failed with result 'exit-code'.
Nov 25 03:08:00 np0005534516 multipathd[235121]: 3435.197145 | --------start up--------
Nov 25 03:08:00 np0005534516 multipathd[235121]: 3435.197161 | read /etc/multipath.conf
Nov 25 03:08:00 np0005534516 multipathd[235121]: 3435.202877 | path checkers start up
Nov 25 03:08:01 np0005534516 python3.9[235312]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:08:01 np0005534516 python3.9[235466]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:02 np0005534516 python3.9[235631]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:08:02 np0005534516 systemd[1]: Stopping multipathd container...
Nov 25 03:08:02 np0005534516 multipathd[235121]: 3437.413767 | exit (signal)
Nov 25 03:08:02 np0005534516 multipathd[235121]: 3437.413886 | --------shut down-------
Nov 25 03:08:02 np0005534516 systemd[1]: libpod-201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938.scope: Deactivated successfully.
Nov 25 03:08:02 np0005534516 podman[235635]: 2025-11-25 08:08:02.802895421 +0000 UTC m=+0.140783700 container died 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:08:02 np0005534516 systemd[1]: 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-2e054a24189f55fe.timer: Deactivated successfully.
Nov 25 03:08:02 np0005534516 systemd[1]: Stopped /usr/bin/podman healthcheck run 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938.
Nov 25 03:08:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-userdata-shm.mount: Deactivated successfully.
Nov 25 03:08:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e22425faa7f9484c203140830acb57dd4d069c188847fa30b8fa39b2fabfee9d-merged.mount: Deactivated successfully.
Nov 25 03:08:03 np0005534516 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 03:08:03 np0005534516 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:08:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:08:03 np0005534516 podman[235635]: 2025-11-25 08:08:03.750667461 +0000 UTC m=+1.088555730 container cleanup 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 03:08:03 np0005534516 podman[235635]: multipathd
Nov 25 03:08:03 np0005534516 podman[235667]: multipathd
Nov 25 03:08:03 np0005534516 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 03:08:03 np0005534516 systemd[1]: Stopped multipathd container.
Nov 25 03:08:03 np0005534516 systemd[1]: Starting multipathd container...
Nov 25 03:08:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22425faa7f9484c203140830acb57dd4d069c188847fa30b8fa39b2fabfee9d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22425faa7f9484c203140830acb57dd4d069c188847fa30b8fa39b2fabfee9d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:03 np0005534516 systemd[1]: Started /usr/bin/podman healthcheck run 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938.
Nov 25 03:08:03 np0005534516 podman[235680]: 2025-11-25 08:08:03.968888449 +0000 UTC m=+0.130482235 container init 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 03:08:03 np0005534516 multipathd[235696]: + sudo -E kolla_set_configs
Nov 25 03:08:03 np0005534516 podman[235680]: 2025-11-25 08:08:03.993609062 +0000 UTC m=+0.155202858 container start 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:08:04 np0005534516 podman[235680]: multipathd
Nov 25 03:08:04 np0005534516 systemd[1]: Started multipathd container.
Nov 25 03:08:04 np0005534516 multipathd[235696]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:08:04 np0005534516 multipathd[235696]: INFO:__main__:Validating config file
Nov 25 03:08:04 np0005534516 multipathd[235696]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:08:04 np0005534516 multipathd[235696]: INFO:__main__:Writing out command to execute
Nov 25 03:08:04 np0005534516 multipathd[235696]: ++ cat /run_command
Nov 25 03:08:04 np0005534516 multipathd[235696]: + CMD='/usr/sbin/multipathd -d'
Nov 25 03:08:04 np0005534516 multipathd[235696]: + ARGS=
Nov 25 03:08:04 np0005534516 multipathd[235696]: + sudo kolla_copy_cacerts
Nov 25 03:08:04 np0005534516 multipathd[235696]: + [[ ! -n '' ]]
Nov 25 03:08:04 np0005534516 multipathd[235696]: + . kolla_extend_start
Nov 25 03:08:04 np0005534516 multipathd[235696]: Running command: '/usr/sbin/multipathd -d'
Nov 25 03:08:04 np0005534516 multipathd[235696]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 03:08:04 np0005534516 multipathd[235696]: + umask 0022
Nov 25 03:08:04 np0005534516 multipathd[235696]: + exec /usr/sbin/multipathd -d
Nov 25 03:08:04 np0005534516 podman[235703]: 2025-11-25 08:08:04.087167866 +0000 UTC m=+0.083200789 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:08:04 np0005534516 systemd[1]: 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-1d88e68652d3bfd4.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 03:08:04 np0005534516 systemd[1]: 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938-1d88e68652d3bfd4.service: Failed with result 'exit-code'.
Nov 25 03:08:04 np0005534516 multipathd[235696]: 3438.750344 | --------start up--------
Nov 25 03:08:04 np0005534516 multipathd[235696]: 3438.750360 | read /etc/multipath.conf
Nov 25 03:08:04 np0005534516 multipathd[235696]: 3438.754900 | path checkers start up
Nov 25 03:08:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:04 np0005534516 python3.9[235889]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:05 np0005534516 python3.9[236041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 03:08:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:08:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3110 writes, 13K keys, 3110 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3110 writes, 3110 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1258 writes, 5489 keys, 1258 commit groups, 1.0 writes per commit group, ingest: 8.30 MB, 0.01 MB/s#012Interval WAL: 1258 writes, 1258 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     34.8      0.41              0.05         6    0.068       0      0       0.0       0.0#012  L6      1/0    6.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     67.0     54.7      0.62              0.08         5    0.124     19K   2238       0.0       0.0#012 Sum      1/0    6.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     40.3     46.8      1.03              0.13        11    0.094     19K   2238       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3     64.6     65.9      0.41              0.07         6    0.068     12K   1479       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     67.0     54.7      0.62              0.08         5    0.124     19K   2238       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     34.9      0.41              0.05         5    0.082       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 1.0 seconds#012Interval compaction: 0.03 GB write, 0.04 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 308.00 MB usage: 1.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(83,1.12 MB,0.362426%) FilterBlock(12,62.11 KB,0.0196928%) IndexBlock(12,122.39 KB,0.0388059%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 03:08:06 np0005534516 python3.9[236193]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 03:08:06 np0005534516 kernel: Key type psk registered
Nov 25 03:08:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:07 np0005534516 python3.9[236356]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:08:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:07 np0005534516 python3.9[236479]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764058086.551701-630-235458265676260/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:08 np0005534516 python3.9[236631]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:09 np0005534516 python3.9[236783]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:08:09 np0005534516 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 03:08:09 np0005534516 systemd[1]: Stopped Load Kernel Modules.
Nov 25 03:08:09 np0005534516 systemd[1]: Stopping Load Kernel Modules...
Nov 25 03:08:09 np0005534516 systemd[1]: Starting Load Kernel Modules...
Nov 25 03:08:09 np0005534516 systemd[1]: Finished Load Kernel Modules.
Nov 25 03:08:09 np0005534516 podman[236887]: 2025-11-25 08:08:09.840414816 +0000 UTC m=+0.091496319 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:08:10 np0005534516 python3.9[236963]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 03:08:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:13 np0005534516 systemd[1]: Reloading.
Nov 25 03:08:13 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:08:13 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:13 np0005534516 systemd[1]: Reloading.
Nov 25 03:08:13 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:08:13 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:13 np0005534516 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 03:08:13 np0005534516 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 03:08:13 np0005534516 lvm[237082]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 03:08:13 np0005534516 lvm[237082]: VG ceph_vg0 finished
Nov 25 03:08:13 np0005534516 lvm[237079]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 03:08:13 np0005534516 lvm[237079]: VG ceph_vg2 finished
Nov 25 03:08:13 np0005534516 lvm[237078]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 03:08:13 np0005534516 lvm[237078]: VG ceph_vg1 finished
Nov 25 03:08:14 np0005534516 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 03:08:14 np0005534516 systemd[1]: Starting man-db-cache-update.service...
Nov 25 03:08:14 np0005534516 systemd[1]: Reloading.
Nov 25 03:08:14 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:14 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:08:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:14 np0005534516 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 03:08:16 np0005534516 python3.9[238422]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:08:16 np0005534516 systemd[1]: Stopping Open-iSCSI...
Nov 25 03:08:16 np0005534516 iscsid[225864]: iscsid shutting down.
Nov 25 03:08:16 np0005534516 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 03:08:16 np0005534516 systemd[1]: Stopped Open-iSCSI.
Nov 25 03:08:16 np0005534516 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 03:08:16 np0005534516 systemd[1]: Starting Open-iSCSI...
Nov 25 03:08:16 np0005534516 systemd[1]: Started Open-iSCSI.
Nov 25 03:08:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:16 np0005534516 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 03:08:16 np0005534516 systemd[1]: Finished man-db-cache-update.service.
Nov 25 03:08:16 np0005534516 systemd[1]: man-db-cache-update.service: Consumed 1.600s CPU time.
Nov 25 03:08:16 np0005534516 systemd[1]: run-r2a0495ba9d6c42579a191c55bdf27efe.service: Deactivated successfully.
Nov 25 03:08:17 np0005534516 python3.9[238577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 03:08:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:18 np0005534516 python3.9[238733]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:19 np0005534516 python3.9[238885]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:08:19 np0005534516 systemd[1]: Reloading.
Nov 25 03:08:19 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:08:19 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:20 np0005534516 python3.9[239069]: ansible-ansible.builtin.service_facts Invoked
Nov 25 03:08:20 np0005534516 network[239086]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 03:08:20 np0005534516 network[239087]: 'network-scripts' will be removed from distribution in near future.
Nov 25 03:08:20 np0005534516 network[239088]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 03:08:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:22 np0005534516 virtnodedevd[215353]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 03:08:22 np0005534516 virtnodedevd[215353]: hostname: compute-0
Nov 25 03:08:22 np0005534516 virtnodedevd[215353]: Make forcefull daemon shutdown
Nov 25 03:08:22 np0005534516 systemd[1]: virtnodedevd.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 03:08:22 np0005534516 systemd[1]: virtnodedevd.service: Failed with result 'exit-code'.
Nov 25 03:08:22 np0005534516 systemd[1]: virtnodedevd.service: Scheduled restart job, restart counter is at 1.
Nov 25 03:08:22 np0005534516 systemd[1]: Stopped libvirt nodedev daemon.
Nov 25 03:08:22 np0005534516 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 03:08:22 np0005534516 systemd[1]: Started libvirt nodedev daemon.
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:24 np0005534516 podman[239358]: 2025-11-25 08:08:24.758245004 +0000 UTC m=+0.065518962 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:08:25 np0005534516 python3.9[239402]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:25 np0005534516 python3.9[239557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:26 np0005534516 python3.9[239710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:27 np0005534516 python3.9[239863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:28 np0005534516 python3.9[240016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:28 np0005534516 python3.9[240169]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:29 np0005534516 python3.9[240322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:30 np0005534516 python3.9[240475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:08:31 np0005534516 python3.9[240628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:32 np0005534516 python3.9[240780]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:32 np0005534516 python3.9[240932]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:33 np0005534516 python3.9[241084]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:34 np0005534516 python3.9[241236]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:34 np0005534516 podman[241360]: 2025-11-25 08:08:34.656784957 +0000 UTC m=+0.067037367 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:08:34 np0005534516 python3.9[241404]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:35 np0005534516 python3.9[241557]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:36 np0005534516 python3.9[241709]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:36 np0005534516 python3.9[241861]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:37 np0005534516 python3.9[242013]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:38 np0005534516 python3.9[242165]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:38 np0005534516 python3.9[242317]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:39 np0005534516 python3.9[242469]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:40 np0005534516 podman[242593]: 2025-11-25 08:08:40.001405694 +0000 UTC m=+0.101480646 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:08:40 np0005534516 python3.9[242641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:40 np0005534516 python3.9[242799]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:08:41.032 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:08:41.033 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:08:41.033 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:08:41 np0005534516 python3.9[242951]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:08:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:42 np0005534516 python3.9[243103]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:43 np0005534516 python3.9[243255]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 03:08:44 np0005534516 python3.9[243407]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:08:44 np0005534516 systemd[1]: Reloading.
Nov 25 03:08:44 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:08:44 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:08:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:45 np0005534516 python3.9[243595]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:46 np0005534516 python3.9[243748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:46 np0005534516 python3.9[243901]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:47 np0005534516 python3.9[244054]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:48 np0005534516 python3.9[244207]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:48 np0005534516 python3.9[244360]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:49 np0005534516 python3.9[244513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:50 np0005534516 python3.9[244666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 03:08:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:08:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6b3020ad-5b28-41fc-a12d-7582f514f4ca does not exist
Nov 25 03:08:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 35f575c1-0c7b-4404-99a1-499b91a0d90a does not exist
Nov 25 03:08:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fb208f81-760f-49da-a75b-f8c0044d928d does not exist
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:08:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:08:51 np0005534516 python3.9[245051]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.691961244 +0000 UTC m=+0.052057604 container create 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:08:51 np0005534516 systemd[1]: Started libpod-conmon-803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f.scope.
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.668008385 +0000 UTC m=+0.028104795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.786145289 +0000 UTC m=+0.146241629 container init 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.794889939 +0000 UTC m=+0.154986269 container start 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.798769706 +0000 UTC m=+0.158866076 container attach 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:08:51 np0005534516 upbeat_wing[245133]: 167 167
Nov 25 03:08:51 np0005534516 systemd[1]: libpod-803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f.scope: Deactivated successfully.
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.806051117 +0000 UTC m=+0.166147437 container died 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:08:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bee7b67e6b84cfac7149cb2550f0dbe2aab0b373b7b841f6fa2c5a805e68cce4-merged.mount: Deactivated successfully.
Nov 25 03:08:51 np0005534516 podman[245093]: 2025-11-25 08:08:51.848531447 +0000 UTC m=+0.208627767 container remove 803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wing, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:08:51 np0005534516 systemd[1]: libpod-conmon-803ae0d17128980e7c63a9eae49779be5d6007083821bc51783fd45e9dec859f.scope: Deactivated successfully.
Nov 25 03:08:52 np0005534516 podman[245233]: 2025-11-25 08:08:52.00657624 +0000 UTC m=+0.040315082 container create fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:08:52 np0005534516 systemd[1]: Started libpod-conmon-fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6.scope.
Nov 25 03:08:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:52 np0005534516 podman[245233]: 2025-11-25 08:08:51.989581962 +0000 UTC m=+0.023320824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:52 np0005534516 podman[245233]: 2025-11-25 08:08:52.09627195 +0000 UTC m=+0.130010822 container init fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 03:08:52 np0005534516 podman[245233]: 2025-11-25 08:08:52.106718937 +0000 UTC m=+0.140457779 container start fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:08:52 np0005534516 podman[245233]: 2025-11-25 08:08:52.113010541 +0000 UTC m=+0.146749593 container attach fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:08:52 np0005534516 python3.9[245304]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:52 np0005534516 python3.9[245458]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:08:53
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'volumes', 'images']
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:08:53 np0005534516 gallant_margulis[245296]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:08:53 np0005534516 gallant_margulis[245296]: --> relative data size: 1.0
Nov 25 03:08:53 np0005534516 gallant_margulis[245296]: --> All data devices are unavailable
Nov 25 03:08:53 np0005534516 systemd[1]: libpod-fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6.scope: Deactivated successfully.
Nov 25 03:08:53 np0005534516 systemd[1]: libpod-fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6.scope: Consumed 1.147s CPU time.
Nov 25 03:08:53 np0005534516 podman[245233]: 2025-11-25 08:08:53.326105551 +0000 UTC m=+1.359844393 container died fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c79bc0ec3d333bca8f659e14d0ca303308b090b9112f379b84d3efbb8487987b-merged.mount: Deactivated successfully.
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:08:53 np0005534516 podman[245233]: 2025-11-25 08:08:53.539902059 +0000 UTC m=+1.573640901 container remove fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:08:53 np0005534516 systemd[1]: libpod-conmon-fdd8844685e9228be62862c9d15e336f08ec6553640b2b8aab1584b29d2f2fd6.scope: Deactivated successfully.
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:08:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:08:53 np0005534516 python3.9[245646]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.203214467 +0000 UTC m=+0.056136377 container create c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:08:54 np0005534516 systemd[1]: Started libpod-conmon-c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e.scope.
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.171999598 +0000 UTC m=+0.024921528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.291619062 +0000 UTC m=+0.144540992 container init c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.299848518 +0000 UTC m=+0.152770418 container start c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.307079618 +0000 UTC m=+0.160001538 container attach c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:08:54 np0005534516 python3.9[245935]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:54 np0005534516 vibrant_mestorf[245955]: 167 167
Nov 25 03:08:54 np0005534516 systemd[1]: libpod-c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e.scope: Deactivated successfully.
Nov 25 03:08:54 np0005534516 conmon[245955]: conmon c33279230607abb2f0db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e.scope/container/memory.events
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.349959319 +0000 UTC m=+0.202881249 container died c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:08:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9a5c57dba47fd780978d4bf4967e271a0a5e2aafd5071fbe6a891e56d883f2f1-merged.mount: Deactivated successfully.
Nov 25 03:08:54 np0005534516 podman[245939]: 2025-11-25 08:08:54.399145794 +0000 UTC m=+0.252067704 container remove c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:08:54 np0005534516 systemd[1]: libpod-conmon-c33279230607abb2f0db5f2cad81cb844bce02b7482899d7b1b5422c5ea22d9e.scope: Deactivated successfully.
Nov 25 03:08:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:54 np0005534516 podman[246030]: 2025-11-25 08:08:54.590805242 +0000 UTC m=+0.050895343 container create ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 03:08:54 np0005534516 systemd[1]: Started libpod-conmon-ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68.scope.
Nov 25 03:08:54 np0005534516 podman[246030]: 2025-11-25 08:08:54.569172346 +0000 UTC m=+0.029262477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a0bcf075f9b8e6b32790f2e75fe77963b693308e3fde9c762870315738f363/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a0bcf075f9b8e6b32790f2e75fe77963b693308e3fde9c762870315738f363/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a0bcf075f9b8e6b32790f2e75fe77963b693308e3fde9c762870315738f363/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a0bcf075f9b8e6b32790f2e75fe77963b693308e3fde9c762870315738f363/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:54 np0005534516 podman[246030]: 2025-11-25 08:08:54.686797645 +0000 UTC m=+0.146887766 container init ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:08:54 np0005534516 podman[246030]: 2025-11-25 08:08:54.693960933 +0000 UTC m=+0.154051034 container start ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:08:54 np0005534516 podman[246030]: 2025-11-25 08:08:54.698264371 +0000 UTC m=+0.158354502 container attach ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:08:54 np0005534516 podman[246152]: 2025-11-25 08:08:54.885111127 +0000 UTC m=+0.067448938 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:08:55 np0005534516 python3.9[246153]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]: {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    "0": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "devices": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "/dev/loop3"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            ],
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_name": "ceph_lv0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_size": "21470642176",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "name": "ceph_lv0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "tags": {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_name": "ceph",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.crush_device_class": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.encrypted": "0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_id": "0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.vdo": "0"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            },
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "vg_name": "ceph_vg0"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        }
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    ],
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    "1": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "devices": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "/dev/loop4"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            ],
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_name": "ceph_lv1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_size": "21470642176",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "name": "ceph_lv1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "tags": {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_name": "ceph",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.crush_device_class": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.encrypted": "0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_id": "1",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.vdo": "0"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            },
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "vg_name": "ceph_vg1"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        }
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    ],
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    "2": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "devices": [
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "/dev/loop5"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            ],
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_name": "ceph_lv2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_size": "21470642176",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "name": "ceph_lv2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "tags": {
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.cluster_name": "ceph",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.crush_device_class": "",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.encrypted": "0",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osd_id": "2",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:                "ceph.vdo": "0"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            },
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "type": "block",
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:            "vg_name": "ceph_vg2"
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:        }
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]:    ]
Nov 25 03:08:55 np0005534516 funny_mendeleev[246095]: }
Nov 25 03:08:55 np0005534516 systemd[1]: libpod-ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68.scope: Deactivated successfully.
Nov 25 03:08:55 np0005534516 podman[246030]: 2025-11-25 08:08:55.593369664 +0000 UTC m=+1.053459785 container died ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:08:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-73a0bcf075f9b8e6b32790f2e75fe77963b693308e3fde9c762870315738f363-merged.mount: Deactivated successfully.
Nov 25 03:08:55 np0005534516 podman[246030]: 2025-11-25 08:08:55.664835961 +0000 UTC m=+1.124926082 container remove ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:08:55 np0005534516 systemd[1]: libpod-conmon-ba786606250d9d0208bc44df3f961db2d15f1bde978f7dab20ea41a64bf70b68.scope: Deactivated successfully.
Nov 25 03:08:55 np0005534516 python3.9[246326]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.360109761 +0000 UTC m=+0.047701655 container create b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:08:56 np0005534516 systemd[1]: Started libpod-conmon-b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1.scope.
Nov 25 03:08:56 np0005534516 python3.9[246613]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.339926624 +0000 UTC m=+0.027518548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.473134513 +0000 UTC m=+0.160726437 container init b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:08:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.481830383 +0000 UTC m=+0.169422277 container start b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 03:08:56 np0005534516 elastic_shirley[246648]: 167 167
Nov 25 03:08:56 np0005534516 systemd[1]: libpod-b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1.scope: Deactivated successfully.
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.523204123 +0000 UTC m=+0.210796007 container attach b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.523604093 +0000 UTC m=+0.211195987 container died b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:08:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a5e1685cdc1ec8f99a04c49fb7e3d4fa1914a328d41e144e727aec1d870352c7-merged.mount: Deactivated successfully.
Nov 25 03:08:56 np0005534516 podman[246632]: 2025-11-25 08:08:56.646366925 +0000 UTC m=+0.333958829 container remove b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:08:56 np0005534516 systemd[1]: libpod-conmon-b1aa5abcab29e4d6415c7a71c4c9151ecb6a7acfdef815cb6bb2ac83187c14b1.scope: Deactivated successfully.
Nov 25 03:08:56 np0005534516 podman[246796]: 2025-11-25 08:08:56.887451704 +0000 UTC m=+0.094166065 container create 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:08:56 np0005534516 systemd[1]: Started libpod-conmon-4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3.scope.
Nov 25 03:08:56 np0005534516 podman[246796]: 2025-11-25 08:08:56.858799785 +0000 UTC m=+0.065514176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:08:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:08:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae43fcecdbf3be5fc41947e5be8f1b179fbdbaa7c0a388f1b71c1bde9b4d5f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae43fcecdbf3be5fc41947e5be8f1b179fbdbaa7c0a388f1b71c1bde9b4d5f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae43fcecdbf3be5fc41947e5be8f1b179fbdbaa7c0a388f1b71c1bde9b4d5f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae43fcecdbf3be5fc41947e5be8f1b179fbdbaa7c0a388f1b71c1bde9b4d5f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:08:56 np0005534516 podman[246796]: 2025-11-25 08:08:56.978410149 +0000 UTC m=+0.185124520 container init 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:08:56 np0005534516 podman[246796]: 2025-11-25 08:08:56.988507588 +0000 UTC m=+0.195221949 container start 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:08:57 np0005534516 podman[246796]: 2025-11-25 08:08:57.007073059 +0000 UTC m=+0.213787440 container attach 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:08:57 np0005534516 python3.9[246838]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:08:57 np0005534516 python3.9[246998]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]: {
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_id": 1,
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "type": "bluestore"
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    },
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_id": 2,
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "type": "bluestore"
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    },
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_id": 0,
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:        "type": "bluestore"
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]:    }
Nov 25 03:08:58 np0005534516 admiring_tesla[246842]: }
Nov 25 03:08:58 np0005534516 systemd[1]: libpod-4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3.scope: Deactivated successfully.
Nov 25 03:08:58 np0005534516 systemd[1]: libpod-4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3.scope: Consumed 1.093s CPU time.
Nov 25 03:08:58 np0005534516 conmon[246842]: conmon 4d4a63a92cf7a782a97a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3.scope/container/memory.events
Nov 25 03:08:58 np0005534516 podman[246796]: 2025-11-25 08:08:58.082552469 +0000 UTC m=+1.289266840 container died 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:08:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7ae43fcecdbf3be5fc41947e5be8f1b179fbdbaa7c0a388f1b71c1bde9b4d5f3-merged.mount: Deactivated successfully.
Nov 25 03:08:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:08:58 np0005534516 podman[246796]: 2025-11-25 08:08:58.582630621 +0000 UTC m=+1.789345012 container remove 4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:08:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:08:58 np0005534516 systemd[1]: libpod-conmon-4d4a63a92cf7a782a97a094caed93c2ccbe96761ff1c8bac056013e65f601fb3.scope: Deactivated successfully.
Nov 25 03:08:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:08:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:08:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:08:58 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b0c34f15-98e0-41f4-8e24-42e9f6e0c1c9 does not exist
Nov 25 03:08:58 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7253b7c6-d23e-4b38-89c1-e0ce76602079 does not exist
Nov 25 03:08:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:08:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:09:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:02 np0005534516 python3.9[247240]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:09:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:09:03 np0005534516 python3.9[247393]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 03:09:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:04 np0005534516 podman[247551]: 2025-11-25 08:09:04.828498059 +0000 UTC m=+0.103038369 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:09:04 np0005534516 python3.9[247552]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 03:09:05 np0005534516 systemd-logind[822]: New session 51 of user zuul.
Nov 25 03:09:05 np0005534516 systemd[1]: Started Session 51 of User zuul.
Nov 25 03:09:06 np0005534516 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 03:09:06 np0005534516 systemd-logind[822]: Session 51 logged out. Waiting for processes to exit.
Nov 25 03:09:06 np0005534516 systemd-logind[822]: Removed session 51.
Nov 25 03:09:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:06 np0005534516 python3.9[247757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:07 np0005534516 python3.9[247878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058146.300364-1249-154431346677697/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:08 np0005534516 python3.9[248028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:08 np0005534516 python3.9[248104]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:09 np0005534516 python3.9[248254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:09 np0005534516 python3.9[248375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058148.6690822-1249-186713386220559/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:10 np0005534516 podman[248499]: 2025-11-25 08:09:10.6464143 +0000 UTC m=+0.079302225 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:09:10 np0005534516 python3.9[248538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:11 np0005534516 python3.9[248672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058150.002647-1249-51770849645438/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:11 np0005534516 python3.9[248822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.518952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152519013, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1778, "num_deletes": 507, "total_data_size": 2457347, "memory_usage": 2504704, "flush_reason": "Manual Compaction"}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 03:09:12 np0005534516 python3.9[248943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058151.5112927-1249-219049457211950/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152632339, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2412384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13085, "largest_seqno": 14862, "table_properties": {"data_size": 2404664, "index_size": 4150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 18027, "raw_average_key_size": 18, "raw_value_size": 2387330, "raw_average_value_size": 2428, "num_data_blocks": 190, "num_entries": 983, "num_filter_entries": 983, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764057988, "oldest_key_time": 1764057988, "file_creation_time": 1764058152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 113470 microseconds, and 5417 cpu microseconds.
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.632419) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2412384 bytes OK
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.632443) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.638132) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.638200) EVENT_LOG_v1 {"time_micros": 1764058152638186, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.638234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2448690, prev total WAL file size 2448690, number of live WAL files 2.
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.639406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2355KB)], [32(6824KB)]
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152639443, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9400771, "oldest_snapshot_seqno": -1}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3843 keys, 7378991 bytes, temperature: kUnknown
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152777639, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7378991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7351610, "index_size": 16647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94497, "raw_average_key_size": 24, "raw_value_size": 7280294, "raw_average_value_size": 1894, "num_data_blocks": 709, "num_entries": 3843, "num_filter_entries": 3843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.777976) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7378991 bytes
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.780658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.0 rd, 53.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4870, records dropped: 1027 output_compression: NoCompression
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.780683) EVENT_LOG_v1 {"time_micros": 1764058152780669, "job": 14, "event": "compaction_finished", "compaction_time_micros": 138339, "compaction_time_cpu_micros": 20590, "output_level": 6, "num_output_files": 1, "total_output_size": 7378991, "num_input_records": 4870, "num_output_records": 3843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152781693, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058152783282, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.639327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.783468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.783474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.783478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.783480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:09:12.783482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:09:13 np0005534516 python3.9[249093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:13 np0005534516 python3.9[249214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058152.7019413-1249-150292630141819/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:14 np0005534516 python3.9[249366]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:09:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:14 np0005534516 python3.9[249518]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:09:15 np0005534516 python3.9[249670]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:16 np0005534516 python3.9[249822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:16 np0005534516 python3.9[249945]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764058155.8716338-1356-198468731263128/.source _original_basename=.oj5zx64y follow=False checksum=8d400221a4777749f418012e3134a0ee77f2c14c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 03:09:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:17 np0005534516 python3.9[250097]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:18 np0005534516 python3.9[250249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:18 np0005534516 python3.9[250370]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058157.8134937-1382-132066742964032/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:19 np0005534516 python3.9[250520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 03:09:20 np0005534516 python3.9[250641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764058159.0094397-1397-176268802253161/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 03:09:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:20 np0005534516 python3.9[250793]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 03:09:21 np0005534516 python3.9[250945]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 03:09:22 np0005534516 python3[251097]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 03:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:28 np0005534516 podman[251149]: 2025-11-25 08:09:28.99243923 +0000 UTC m=+3.676341582 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 03:09:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:34 np0005534516 podman[251110]: 2025-11-25 08:09:34.675653511 +0000 UTC m=+12.139803143 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 03:09:34 np0005534516 podman[251212]: 2025-11-25 08:09:34.796443708 +0000 UTC m=+0.024898727 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 03:09:34 np0005534516 podman[251212]: 2025-11-25 08:09:34.916806724 +0000 UTC m=+0.145261683 container create ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:09:34 np0005534516 python3[251097]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 03:09:35 np0005534516 podman[251374]: 2025-11-25 08:09:35.514010261 +0000 UTC m=+0.068727624 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:09:35 np0005534516 python3.9[251419]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:36 np0005534516 python3.9[251577]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 03:09:37 np0005534516 python3.9[251729]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 03:09:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:38 np0005534516 python3[251881]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 03:09:38 np0005534516 podman[251916]: 2025-11-25 08:09:38.317503222 +0000 UTC m=+0.053057382 container create 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:09:38 np0005534516 podman[251916]: 2025-11-25 08:09:38.286672333 +0000 UTC m=+0.022226523 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 03:09:38 np0005534516 python3[251881]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 03:09:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:39 np0005534516 python3.9[252106]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:40 np0005534516 python3.9[252260]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:09:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:40 np0005534516 python3.9[252411]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764058180.1369398-1489-266749301068557/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 03:09:40 np0005534516 podman[252412]: 2025-11-25 08:09:40.858746461 +0000 UTC m=+0.111782930 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:09:41.033 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:09:41.033 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:09:41.033 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:09:41 np0005534516 python3.9[252513]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 03:09:41 np0005534516 systemd[1]: Reloading.
Nov 25 03:09:41 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:09:41 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:09:42 np0005534516 python3.9[252623]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 03:09:42 np0005534516 systemd[1]: Reloading.
Nov 25 03:09:42 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 03:09:42 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 03:09:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:42 np0005534516 systemd[1]: Starting nova_compute container...
Nov 25 03:09:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:09:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:42 np0005534516 podman[252663]: 2025-11-25 08:09:42.805596199 +0000 UTC m=+0.091142340 container init 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, config_id=edpm)
Nov 25 03:09:42 np0005534516 podman[252663]: 2025-11-25 08:09:42.81397944 +0000 UTC m=+0.099525551 container start 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:09:42 np0005534516 podman[252663]: nova_compute
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + sudo -E kolla_set_configs
Nov 25 03:09:42 np0005534516 systemd[1]: Started nova_compute container.
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Validating config file
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying service configuration files
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Deleting /etc/ceph
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Creating directory /etc/ceph
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Writing out command to execute
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:42 np0005534516 nova_compute[252679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 03:09:42 np0005534516 nova_compute[252679]: ++ cat /run_command
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + CMD=nova-compute
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + ARGS=
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + sudo kolla_copy_cacerts
Nov 25 03:09:42 np0005534516 nova_compute[252679]: Running command: 'nova-compute'
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + [[ ! -n '' ]]
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + . kolla_extend_start
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + umask 0022
Nov 25 03:09:42 np0005534516 nova_compute[252679]: + exec nova-compute
Nov 25 03:09:43 np0005534516 python3.9[252840]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:44 np0005534516 python3.9[252991]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:45 np0005534516 python3.9[253141]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 03:09:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.631 252683 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.631 252683 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.632 252683 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.632 252683 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 03:09:46 np0005534516 python3.9[253295]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 03:09:46 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.901 252683 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.920 252683 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:09:46 np0005534516 nova_compute[252679]: 2025-11-25 08:09:46.921 252683 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 03:09:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:47 np0005534516 python3.9[253474]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 03:09:47 np0005534516 nova_compute[252679]: 2025-11-25 08:09:47.688 252683 INFO nova.virt.driver [None req-057ef57f-a4b8-401e-96cc-5861abc1ea1f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 03:09:47 np0005534516 systemd[1]: Stopping nova_compute container...
Nov 25 03:09:47 np0005534516 systemd[1]: libpod-91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20.scope: Deactivated successfully.
Nov 25 03:09:47 np0005534516 systemd[1]: libpod-91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20.scope: Consumed 2.690s CPU time.
Nov 25 03:09:47 np0005534516 podman[253478]: 2025-11-25 08:09:47.838375708 +0000 UTC m=+0.127283437 container died 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:09:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20-userdata-shm.mount: Deactivated successfully.
Nov 25 03:09:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd-merged.mount: Deactivated successfully.
Nov 25 03:09:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:09:53
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'volumes', '.rgw.root']
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:09:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:09:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:54 np0005534516 podman[253478]: 2025-11-25 08:09:54.87738633 +0000 UTC m=+7.166293999 container cleanup 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:09:54 np0005534516 podman[253478]: nova_compute
Nov 25 03:09:54 np0005534516 podman[253510]: nova_compute
Nov 25 03:09:54 np0005534516 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 03:09:54 np0005534516 systemd[1]: Stopped nova_compute container.
Nov 25 03:09:54 np0005534516 systemd[1]: Starting nova_compute container...
Nov 25 03:09:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:09:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f520f796187c812df55952ce6376a3337c7b111cc9368dbd21fda8da12c2adfd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:55 np0005534516 podman[253523]: 2025-11-25 08:09:55.15258006 +0000 UTC m=+0.155396132 container init 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 03:09:55 np0005534516 podman[253523]: 2025-11-25 08:09:55.161654709 +0000 UTC m=+0.164470751 container start 91f47f57df2646ec74ecc1a5b33a58591f2c43f1aff63ca45de8f906b345da20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + sudo -E kolla_set_configs
Nov 25 03:09:55 np0005534516 podman[253523]: nova_compute
Nov 25 03:09:55 np0005534516 systemd[1]: Started nova_compute container.
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Validating config file
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying service configuration files
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /etc/ceph
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Creating directory /etc/ceph
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Writing out command to execute
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:55 np0005534516 nova_compute[253538]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 03:09:55 np0005534516 nova_compute[253538]: ++ cat /run_command
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + CMD=nova-compute
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + ARGS=
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + sudo kolla_copy_cacerts
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + [[ ! -n '' ]]
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + . kolla_extend_start
Nov 25 03:09:55 np0005534516 nova_compute[253538]: Running command: 'nova-compute'
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + umask 0022
Nov 25 03:09:55 np0005534516 nova_compute[253538]: + exec nova-compute
Nov 25 03:09:55 np0005534516 python3.9[253701]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 03:09:56 np0005534516 systemd[1]: Started libpod-conmon-ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7.scope.
Nov 25 03:09:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:09:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8fe428e8c1e21af782aca23484d6aa4074808a621f0d086a37a891a9af212a5/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8fe428e8c1e21af782aca23484d6aa4074808a621f0d086a37a891a9af212a5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8fe428e8c1e21af782aca23484d6aa4074808a621f0d086a37a891a9af212a5/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 03:09:56 np0005534516 podman[253728]: 2025-11-25 08:09:56.423569083 +0000 UTC m=+0.351612924 container init ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:09:56 np0005534516 podman[253728]: 2025-11-25 08:09:56.433729643 +0000 UTC m=+0.361773454 container start ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Nov 25 03:09:56 np0005534516 python3.9[253701]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 03:09:56 np0005534516 nova_compute_init[253749]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 03:09:56 np0005534516 systemd[1]: libpod-ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7.scope: Deactivated successfully.
Nov 25 03:09:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:56 np0005534516 podman[253763]: 2025-11-25 08:09:56.537019478 +0000 UTC m=+0.027225511 container died ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 03:09:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7-userdata-shm.mount: Deactivated successfully.
Nov 25 03:09:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f8fe428e8c1e21af782aca23484d6aa4074808a621f0d086a37a891a9af212a5-merged.mount: Deactivated successfully.
Nov 25 03:09:56 np0005534516 podman[253763]: 2025-11-25 08:09:56.684495849 +0000 UTC m=+0.174701902 container cleanup ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:09:56 np0005534516 systemd[1]: libpod-conmon-ae8ddf4e41b839184dc557e1ce145f7af9d11f6c37e54b2694a4da31dd64d5e7.scope: Deactivated successfully.
Nov 25 03:09:57 np0005534516 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 03:09:57 np0005534516 systemd[1]: session-50.scope: Consumed 2min 23.127s CPU time.
Nov 25 03:09:57 np0005534516 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 25 03:09:57 np0005534516 systemd-logind[822]: Removed session 50.
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.362 253542 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.363 253542 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.363 253542 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.363 253542 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 03:09:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.516 253542 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.541 253542 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:09:57 np0005534516 nova_compute[253538]: 2025-11-25 08:09:57.542 253542 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.013 253542 INFO nova.virt.driver [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.310 253542 INFO nova.compute.provider_config [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.322 253542 DEBUG oslo_concurrency.lockutils [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.322 253542 DEBUG oslo_concurrency.lockutils [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.322 253542 DEBUG oslo_concurrency.lockutils [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.323 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.324 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.325 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.326 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.327 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.328 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.329 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.329 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.329 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.329 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.329 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.330 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.331 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.332 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.333 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.334 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.335 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.336 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.337 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.338 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.339 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.339 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.339 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.339 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.339 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.340 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.341 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.342 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.343 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.343 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.343 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.343 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.343 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.344 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.345 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.346 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.347 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.348 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.349 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.350 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.351 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.352 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.353 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.354 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.355 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.356 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.357 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.358 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.359 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.359 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.359 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.359 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.359 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.360 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.361 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.362 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.363 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.364 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.365 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.366 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.367 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.368 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.369 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.370 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.371 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.372 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.373 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.374 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.375 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.375 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.375 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.375 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.375 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.376 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.377 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.377 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.377 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.377 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.378 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.379 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.379 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.379 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.379 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.379 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.380 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.380 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.380 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.380 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.380 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.381 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.382 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.383 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.383 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.383 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.383 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.383 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.384 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.385 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.386 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.387 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.388 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.389 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.390 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.391 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.392 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.392 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.392 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.392 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.393 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.394 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.395 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.395 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.395 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.395 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.395 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.396 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.396 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.396 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.396 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.396 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.397 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.397 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.397 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.397 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.397 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.398 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.399 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.399 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.399 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.399 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.399 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.400 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.401 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.402 253542 WARNING oslo_config.cfg [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 03:09:58 np0005534516 nova_compute[253538]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 03:09:58 np0005534516 nova_compute[253538]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 03:09:58 np0005534516 nova_compute[253538]: and ``live_migration_inbound_addr`` respectively.
Nov 25 03:09:58 np0005534516 nova_compute[253538]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.402 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.402 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.402 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.402 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.403 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.403 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.403 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.403 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.403 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.404 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.405 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.405 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.405 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.405 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rbd_secret_uuid        = a058ea16-8b73-51e1-b172-ed66107102bf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.406 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.406 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.406 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.406 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.406 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.407 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.408 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.409 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.410 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.410 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.410 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.410 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.410 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.411 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.411 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.411 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.411 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.411 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.412 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.412 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.412 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.412 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.412 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.413 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.413 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.413 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.413 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.413 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.414 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.414 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.414 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.414 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.414 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.415 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.415 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.415 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.415 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.415 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.416 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.416 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.416 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.416 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.416 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.417 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.418 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.418 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.418 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.418 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.418 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.419 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.420 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.421 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.422 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.422 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.422 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.422 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.422 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.423 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.424 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.424 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.424 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.424 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.424 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.425 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.426 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.426 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.426 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.426 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.426 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.427 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.427 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.427 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.428 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.428 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.428 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.429 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.430 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.430 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.430 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.430 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.430 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.431 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.432 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.433 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.434 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.434 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.434 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.434 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.434 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.435 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.435 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.435 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.435 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.436 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.437 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.438 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.439 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.439 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.439 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.439 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.439 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.440 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.441 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.442 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.443 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.444 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.445 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.446 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.447 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.447 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.447 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.448 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.449 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.450 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.451 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.451 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.451 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.451 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.451 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.452 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.452 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.452 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.452 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.452 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.453 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.453 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.453 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.453 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.453 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.454 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.455 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.455 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.455 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.455 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.455 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.456 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.457 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.458 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.459 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.460 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.461 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.462 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.463 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.464 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.465 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.466 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.467 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.468 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.469 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.470 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.471 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.472 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.473 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.474 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.475 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.475 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.475 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.475 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.475 253542 DEBUG oslo_service.service [None req-70e36426-222a-4c90-a767-1c3d58e7e1b1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.477 253542 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 03:09:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.508 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.509 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.509 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.510 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 03:09:58 np0005534516 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 03:09:58 np0005534516 systemd[1]: Started libvirt QEMU daemon.
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.594 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f843d6c5550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.596 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f843d6c5550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.597 253542 INFO nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.625 253542 WARNING nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 03:09:58 np0005534516 nova_compute[253538]: 2025-11-25 08:09:58.626 253542 DEBUG nova.virt.libvirt.volume.mount [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.471 253542 INFO nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <host>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <uuid>c15e9c69-2e4d-4822-bb25-13f5e1e4f89a</uuid>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <arch>x86_64</arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model>EPYC-Rome-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <vendor>AMD</vendor>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <microcode version='16777317'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <signature family='23' model='49' stepping='0'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='x2apic'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='tsc-deadline'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='osxsave'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='hypervisor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='tsc_adjust'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='spec-ctrl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='stibp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='arch-capabilities'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='cmp_legacy'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='topoext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='virt-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='lbrv'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='tsc-scale'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='vmcb-clean'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='pause-filter'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='pfthreshold'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='svme-addr-chk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='rdctl-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='skip-l1dfl-vmentry'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='mds-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature name='pschange-mc-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <pages unit='KiB' size='4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <pages unit='KiB' size='2048'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <pages unit='KiB' size='1048576'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <power_management>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <suspend_mem/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </power_management>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <iommu support='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <migration_features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <live/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <uri_transports>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <uri_transport>tcp</uri_transport>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <uri_transport>rdma</uri_transport>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </uri_transports>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </migration_features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <topology>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <cells num='1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <cell id='0'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <memory unit='KiB'>7864320</memory>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <pages unit='KiB' size='2048'>0</pages>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <distances>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <sibling id='0' value='10'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          </distances>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          <cpus num='8'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:          </cpus>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        </cell>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </cells>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </topology>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <cache>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </cache>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <secmodel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model>selinux</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <doi>0</doi>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </secmodel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <secmodel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model>dac</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <doi>0</doi>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </secmodel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </host>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <guest>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <os_type>hvm</os_type>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <arch name='i686'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <wordsize>32</wordsize>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <domain type='qemu'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <domain type='kvm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <pae/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <nonpae/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <acpi default='on' toggle='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <apic default='on' toggle='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <cpuselection/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <deviceboot/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <disksnapshot default='on' toggle='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <externalSnapshot/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </guest>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <guest>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <os_type>hvm</os_type>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <arch name='x86_64'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <wordsize>64</wordsize>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <domain type='qemu'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <domain type='kvm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <acpi default='on' toggle='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <apic default='on' toggle='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <cpuselection/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <deviceboot/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <disksnapshot default='on' toggle='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <externalSnapshot/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </guest>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 
Nov 25 03:09:59 np0005534516 nova_compute[253538]: </capabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: #033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.483 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.502 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 03:09:59 np0005534516 nova_compute[253538]: <domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <domain>kvm</domain>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <arch>i686</arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <vcpu max='4096'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <iothreads supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <os supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='firmware'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <loader supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>rom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pflash</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='readonly'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>yes</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='secure'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </loader>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-passthrough' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='hostPassthroughMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='maximum' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='maximumMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-model' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <vendor>AMD</vendor>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='x2apic'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='hypervisor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='stibp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='overflow-recov'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='succor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lbrv'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-scale'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='flushbyasid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pause-filter'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pfthreshold'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='disable' name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='custom' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Dhyana-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-128'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-256'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-512'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v6'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v7'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <memoryBacking supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='sourceType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>anonymous</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>memfd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </memoryBacking>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <disk supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='diskDevice'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>disk</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cdrom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>floppy</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>lun</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>fdc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>sata</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <graphics supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vnc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egl-headless</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <video supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='modelType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vga</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cirrus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>none</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>bochs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ramfb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hostdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='mode'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>subsystem</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='startupPolicy'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>mandatory</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>requisite</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>optional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='subsysType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pci</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='capsType'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='pciBackend'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hostdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <rng supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>random</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <filesystem supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='driverType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>path</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>handle</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtiofs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </filesystem>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <tpm supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-tis</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-crb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emulator</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>external</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendVersion'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>2.0</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </tpm>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <redirdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </redirdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <channel supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </channel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <crypto supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </crypto>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <interface supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>passt</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <panic supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>isa</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>hyperv</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </panic>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <console supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>null</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dev</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pipe</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stdio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>udp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tcp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu-vdagent</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <gic supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <vmcoreinfo supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <genid supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backingStoreInput supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backup supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <async-teardown supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <ps2 supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sev supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sgx supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hyperv supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='features'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>relaxed</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vapic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>spinlocks</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vpindex</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>runtime</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>synic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stimer</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reset</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vendor_id</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>frequencies</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reenlightenment</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tlbflush</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ipi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>avic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emsr_bitmap</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>xmm_input</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <spinlocks>4095</spinlocks>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <stimer_direct>on</stimer_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hyperv>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <launchSecurity supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='sectype'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tdx</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </launchSecurity>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: </domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.510 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 03:09:59 np0005534516 nova_compute[253538]: <domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <domain>kvm</domain>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <arch>i686</arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <vcpu max='240'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <iothreads supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <os supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='firmware'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <loader supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>rom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pflash</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='readonly'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>yes</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='secure'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </loader>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-passthrough' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='hostPassthroughMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='maximum' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='maximumMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-model' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <vendor>AMD</vendor>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='x2apic'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='hypervisor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='stibp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='overflow-recov'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='succor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lbrv'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-scale'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='flushbyasid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pause-filter'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pfthreshold'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='disable' name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='custom' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Dhyana-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-128'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-256'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-512'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v6'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v7'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <memoryBacking supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='sourceType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>anonymous</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>memfd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </memoryBacking>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <disk supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='diskDevice'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>disk</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cdrom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>floppy</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>lun</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ide</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>fdc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>sata</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <graphics supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vnc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egl-headless</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <video supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='modelType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vga</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cirrus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>none</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>bochs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ramfb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hostdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='mode'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>subsystem</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='startupPolicy'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>mandatory</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>requisite</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>optional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='subsysType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pci</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='capsType'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='pciBackend'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hostdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <rng supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>random</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <filesystem supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='driverType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>path</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>handle</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtiofs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </filesystem>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <tpm supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-tis</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-crb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emulator</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>external</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendVersion'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>2.0</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </tpm>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <redirdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </redirdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <channel supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </channel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <crypto supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </crypto>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <interface supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>passt</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <panic supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>isa</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>hyperv</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </panic>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <console supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>null</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dev</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pipe</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stdio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>udp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tcp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu-vdagent</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <gic supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <vmcoreinfo supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <genid supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backingStoreInput supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backup supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <async-teardown supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <ps2 supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sev supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sgx supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hyperv supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='features'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>relaxed</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vapic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>spinlocks</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vpindex</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>runtime</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>synic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stimer</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reset</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vendor_id</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>frequencies</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reenlightenment</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tlbflush</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ipi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>avic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emsr_bitmap</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>xmm_input</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <spinlocks>4095</spinlocks>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <stimer_direct>on</stimer_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hyperv>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <launchSecurity supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='sectype'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tdx</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </launchSecurity>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: </domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.534 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.539 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 03:09:59 np0005534516 nova_compute[253538]: <domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <domain>kvm</domain>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <arch>x86_64</arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <vcpu max='4096'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <iothreads supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <os supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='firmware'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>efi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <loader supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>rom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pflash</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='readonly'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>yes</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='secure'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>yes</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </loader>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-passthrough' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='hostPassthroughMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='maximum' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='maximumMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-model' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <vendor>AMD</vendor>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='x2apic'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='hypervisor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='stibp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='overflow-recov'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='succor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lbrv'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-scale'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='flushbyasid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pause-filter'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pfthreshold'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='disable' name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='custom' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Dhyana-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-128'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-256'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-512'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v6'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v7'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <memoryBacking supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='sourceType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>anonymous</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>memfd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </memoryBacking>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <disk supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='diskDevice'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>disk</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cdrom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>floppy</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>lun</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>fdc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>sata</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <graphics supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vnc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egl-headless</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <video supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='modelType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vga</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cirrus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>none</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>bochs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ramfb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hostdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='mode'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>subsystem</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='startupPolicy'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>mandatory</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>requisite</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>optional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='subsysType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pci</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='capsType'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='pciBackend'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hostdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <rng supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>random</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <filesystem supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='driverType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>path</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>handle</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtiofs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </filesystem>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <tpm supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-tis</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-crb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emulator</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>external</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendVersion'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>2.0</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </tpm>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <redirdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </redirdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <channel supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </channel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <crypto supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </crypto>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <interface supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>passt</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <panic supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>isa</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>hyperv</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </panic>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <console supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>null</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dev</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pipe</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stdio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>udp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tcp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu-vdagent</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <gic supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <vmcoreinfo supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <genid supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backingStoreInput supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backup supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <async-teardown supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <ps2 supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sev supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sgx supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hyperv supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='features'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>relaxed</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vapic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>spinlocks</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vpindex</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>runtime</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>synic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stimer</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reset</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vendor_id</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>frequencies</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reenlightenment</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tlbflush</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ipi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>avic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emsr_bitmap</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>xmm_input</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <spinlocks>4095</spinlocks>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <stimer_direct>on</stimer_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hyperv>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <launchSecurity supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='sectype'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tdx</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </launchSecurity>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: </domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.606 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 03:09:59 np0005534516 nova_compute[253538]: <domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <domain>kvm</domain>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <arch>x86_64</arch>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <vcpu max='240'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <iothreads supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <os supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='firmware'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <loader supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>rom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pflash</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='readonly'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>yes</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='secure'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>no</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </loader>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-passthrough' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='hostPassthroughMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='maximum' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='maximumMigratable'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>on</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>off</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='host-model' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <vendor>AMD</vendor>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='x2apic'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='hypervisor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='stibp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='overflow-recov'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='succor'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lbrv'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='tsc-scale'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='flushbyasid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pause-filter'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='pfthreshold'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <feature policy='disable' name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <mode name='custom' supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Broadwell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Cooperlake-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Denverton-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Dhyana-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='auto-ibrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Milan-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amd-psfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='no-nested-data-bp'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='null-sel-clr-base'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='stibp-always-on'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-Rome-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='EPYC-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='GraniteRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-128'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-256'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx10-512'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='prefetchiti'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Haswell-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v6'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Icelake-Server-v7'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='IvyBridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='KnightsMill-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4fmaps'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-4vnniw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512er'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512pf'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G4-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Opteron_G5-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fma4'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tbm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xop'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SapphireRapids-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='amx-tile'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-bf16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-fp16'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512-vpopcntdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bitalg'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vbmi2'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrc'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fzrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='la57'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='taa-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='tsx-ldtrk'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xfd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='SierraForest-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ifma'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-ne-convert'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx-vnni-int8'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='bus-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cmpccxadd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fbsdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='fsrs'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ibrs-all'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mcdt-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pbrsb-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='psdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='sbdr-ssdp-no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='serialize'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vaes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='vpclmulqdq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Client-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='hle'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='rtm'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Skylake-Server-v5'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512bw'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512cd'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512dq'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512f'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='avx512vl'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='invpcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pcid'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='pku'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='mpx'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v2'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v3'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='core-capability'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='split-lock-detect'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='Snowridge-v4'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='cldemote'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='erms'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='gfni'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdir64b'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='movdiri'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='xsaves'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='athlon-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='core2duo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='coreduo-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='n270-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='ss'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <blockers model='phenom-v1'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnow'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <feature name='3dnowext'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </blockers>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </mode>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <memoryBacking supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <enum name='sourceType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>anonymous</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <value>memfd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </memoryBacking>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <disk supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='diskDevice'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>disk</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cdrom</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>floppy</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>lun</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ide</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>fdc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>sata</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <graphics supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vnc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egl-headless</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <video supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='modelType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vga</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>cirrus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>none</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>bochs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ramfb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hostdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='mode'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>subsystem</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='startupPolicy'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>mandatory</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>requisite</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>optional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='subsysType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pci</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>scsi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='capsType'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='pciBackend'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hostdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <rng supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtio-non-transitional</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>random</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>egd</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <filesystem supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='driverType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>path</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>handle</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>virtiofs</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </filesystem>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <tpm supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-tis</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tpm-crb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emulator</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>external</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendVersion'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>2.0</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </tpm>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <redirdev supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='bus'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>usb</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </redirdev>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <channel supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </channel>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <crypto supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendModel'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>builtin</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </crypto>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <interface supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='backendType'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>default</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>passt</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <panic supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='model'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>isa</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>hyperv</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </panic>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <console supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='type'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>null</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vc</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pty</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dev</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>file</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>pipe</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stdio</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>udp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tcp</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>unix</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>qemu-vdagent</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>dbus</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <gic supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <vmcoreinfo supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <genid supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backingStoreInput supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <backup supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <async-teardown supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <ps2 supported='yes'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sev supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <sgx supported='no'/>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <hyperv supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='features'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>relaxed</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vapic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>spinlocks</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vpindex</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>runtime</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>synic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>stimer</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reset</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>vendor_id</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>frequencies</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>reenlightenment</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tlbflush</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>ipi</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>avic</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>emsr_bitmap</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>xmm_input</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <spinlocks>4095</spinlocks>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <stimer_direct>on</stimer_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </defaults>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </hyperv>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    <launchSecurity supported='yes'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      <enum name='sectype'>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:        <value>tdx</value>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:      </enum>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:    </launchSecurity>
Nov 25 03:09:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: </domainCapabilities>
Nov 25 03:09:59 np0005534516 nova_compute[253538]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.664 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.665 253542 INFO nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Secure Boot support detected#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.667 253542 INFO nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.667 253542 INFO nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.677 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.706 253542 INFO nova.virt.node [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Determined node identity 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from /var/lib/nova/compute_id#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.725 253542 WARNING nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Compute nodes ['06cbbb86-d0ab-41fb-a5e5-8a0de51561b4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.762 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 25 03:09:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:09:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.983 253542 WARNING nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.983 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.984 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.984 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.984 253542 DEBUG nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:09:59 np0005534516 nova_compute[253538]: 2025-11-25 08:09:59.984 253542 DEBUG oslo_concurrency.processutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646872483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.464 253542 DEBUG oslo_concurrency.processutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:10:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.631 253542 WARNING nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.633 253542 DEBUG nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.633 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.634 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.651 253542 WARNING nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] No compute node record for compute-0.ctlplane.example.com:06cbbb86-d0ab-41fb-a5e5-8a0de51561b4: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 could not be found.#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.693 253542 INFO nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.743 253542 DEBUG nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:10:00 np0005534516 nova_compute[253538]: 2025-11-25 08:10:00.743 253542 DEBUG nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b57131df-3da5-461e-b79d-03df3ca67d97 does not exist
Nov 25 03:10:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 888dec3a-6c6b-4a3a-a05f-ca2dcacddd69 does not exist
Nov 25 03:10:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 69fc9587-29f6-42ef-b4c8-a1960fd0bf84 does not exist
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:10:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:10:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:10:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:10:01 np0005534516 podman[254296]: 2025-11-25 08:10:01.530813512 +0000 UTC m=+0.027831057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:01 np0005534516 nova_compute[253538]: 2025-11-25 08:10:01.651 253542 INFO nova.scheduler.client.report [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [req-1c99fa62-1708-449e-96b6-9a78376dd5a0] Created resource provider record via placement API for resource provider with UUID 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 and name compute-0.ctlplane.example.com.#033[00m
Nov 25 03:10:01 np0005534516 podman[254296]: 2025-11-25 08:10:01.743425528 +0000 UTC m=+0.240443083 container create ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:10:01 np0005534516 systemd[1]: Started libpod-conmon-ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4.scope.
Nov 25 03:10:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.018 253542 DEBUG oslo_concurrency.processutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:10:02 np0005534516 podman[254296]: 2025-11-25 08:10:02.099813074 +0000 UTC m=+0.596830639 container init ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:10:02 np0005534516 podman[254296]: 2025-11-25 08:10:02.111977589 +0000 UTC m=+0.608995134 container start ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:10:02 np0005534516 tender_fermat[254313]: 167 167
Nov 25 03:10:02 np0005534516 systemd[1]: libpod-ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4.scope: Deactivated successfully.
Nov 25 03:10:02 np0005534516 conmon[254313]: conmon ce93bde498216cf3d112 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4.scope/container/memory.events
Nov 25 03:10:02 np0005534516 podman[254296]: 2025-11-25 08:10:02.22243735 +0000 UTC m=+0.719454885 container attach ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:10:02 np0005534516 podman[254296]: 2025-11-25 08:10:02.224795856 +0000 UTC m=+0.721813381 container died ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:10:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:10:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996146319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:10:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.491 253542 DEBUG oslo_concurrency.processutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:10:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5709a9e931f2cf3dc4d8f36c4f002d5f566257f6ad061d8e81813045ee0f3473-merged.mount: Deactivated successfully.
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.502 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 03:10:02 np0005534516 nova_compute[253538]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.502 253542 INFO nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.503 253542 DEBUG nova.compute.provider_tree [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.504 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:10:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:02 np0005534516 podman[254296]: 2025-11-25 08:10:02.555932105 +0000 UTC m=+1.052949620 container remove ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.573 253542 DEBUG nova.scheduler.client.report [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Updated inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.574 253542 DEBUG nova.compute.provider_tree [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Updating resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.574 253542 DEBUG nova.compute.provider_tree [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:10:02 np0005534516 systemd[1]: libpod-conmon-ce93bde498216cf3d1126b6f427d417418a8cac6c672f9ae1e6e35033806fae4.scope: Deactivated successfully.
Nov 25 03:10:02 np0005534516 podman[254359]: 2025-11-25 08:10:02.740488898 +0000 UTC m=+0.056080525 container create 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.765 253542 DEBUG nova.compute.provider_tree [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Updating resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 03:10:02 np0005534516 systemd[1]: Started libpod-conmon-07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0.scope.
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.794 253542 DEBUG nova.compute.resource_tracker [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.794 253542 DEBUG oslo_concurrency.lockutils [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.794 253542 DEBUG nova.service [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 25 03:10:02 np0005534516 podman[254359]: 2025-11-25 08:10:02.714454761 +0000 UTC m=+0.030046438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.836 253542 DEBUG nova.service [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 25 03:10:02 np0005534516 nova_compute[253538]: 2025-11-25 08:10:02.836 253542 DEBUG nova.servicegroup.drivers.db [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 25 03:10:02 np0005534516 podman[254359]: 2025-11-25 08:10:02.849257994 +0000 UTC m=+0.164849651 container init 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 03:10:02 np0005534516 podman[254359]: 2025-11-25 08:10:02.861668255 +0000 UTC m=+0.177259862 container start 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:10:02 np0005534516 podman[254359]: 2025-11-25 08:10:02.865131742 +0000 UTC m=+0.180723379 container attach 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:10:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:10:03 np0005534516 nifty_dijkstra[254375]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:10:03 np0005534516 nifty_dijkstra[254375]: --> relative data size: 1.0
Nov 25 03:10:03 np0005534516 nifty_dijkstra[254375]: --> All data devices are unavailable
Nov 25 03:10:03 np0005534516 systemd[1]: libpod-07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0.scope: Deactivated successfully.
Nov 25 03:10:03 np0005534516 systemd[1]: libpod-07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0.scope: Consumed 1.046s CPU time.
Nov 25 03:10:03 np0005534516 conmon[254375]: conmon 07a9c6b6af63ca76da45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0.scope/container/memory.events
Nov 25 03:10:03 np0005534516 podman[254359]: 2025-11-25 08:10:03.96054129 +0000 UTC m=+1.276132947 container died 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:10:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c2d3ba8521163ed577b6592062814cdb65fda4f2b69885b2f23958f356e2f8db-merged.mount: Deactivated successfully.
Nov 25 03:10:04 np0005534516 podman[254359]: 2025-11-25 08:10:04.025278313 +0000 UTC m=+1.340869960 container remove 07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:10:04 np0005534516 systemd[1]: libpod-conmon-07a9c6b6af63ca76da451baf894e9af165eb3b986b5e46501eb308648f417ab0.scope: Deactivated successfully.
Nov 25 03:10:04 np0005534516 podman[254439]: 2025-11-25 08:10:04.228718995 +0000 UTC m=+0.071714085 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:10:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.692330384 +0000 UTC m=+0.057493135 container create 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:10:04 np0005534516 systemd[1]: Started libpod-conmon-50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007.scope.
Nov 25 03:10:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.674430882 +0000 UTC m=+0.039593663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.785961242 +0000 UTC m=+0.151124013 container init 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.79564768 +0000 UTC m=+0.160810431 container start 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.799390383 +0000 UTC m=+0.164553134 container attach 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:10:04 np0005534516 optimistic_shockley[254591]: 167 167
Nov 25 03:10:04 np0005534516 systemd[1]: libpod-50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007.scope: Deactivated successfully.
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.801814819 +0000 UTC m=+0.166977570 container died 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:10:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d605a1228432cde1594e2bfab892d0eb807642e2588855da34d9877f525ef786-merged.mount: Deactivated successfully.
Nov 25 03:10:04 np0005534516 podman[254574]: 2025-11-25 08:10:04.839911419 +0000 UTC m=+0.205074170 container remove 50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:10:04 np0005534516 systemd[1]: libpod-conmon-50a11313772f167fa36afc59adc3f559dc3051f597663e39169bd77c8983d007.scope: Deactivated successfully.
Nov 25 03:10:04 np0005534516 podman[254615]: 2025-11-25 08:10:04.993969862 +0000 UTC m=+0.038237444 container create ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 03:10:05 np0005534516 systemd[1]: Started libpod-conmon-ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81.scope.
Nov 25 03:10:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21a6f70bb6c5022450b3b8e7f49fd342fe0fb534449cf227f6f98ae1511a621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21a6f70bb6c5022450b3b8e7f49fd342fe0fb534449cf227f6f98ae1511a621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21a6f70bb6c5022450b3b8e7f49fd342fe0fb534449cf227f6f98ae1511a621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21a6f70bb6c5022450b3b8e7f49fd342fe0fb534449cf227f6f98ae1511a621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:05.060109883 +0000 UTC m=+0.104377465 container init ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:05.067919938 +0000 UTC m=+0.112187530 container start ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:04.975872643 +0000 UTC m=+0.020140245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:05.071760564 +0000 UTC m=+0.116028146 container attach ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:10:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:10:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.5 total, 600.0 interval
Cumulative writes: 5422 writes, 22K keys, 5422 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5422 writes, 823 syncs, 6.59 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
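RocksDB emits its periodic stats dump as a single multi-line message; syslog collectors such as rsyslog commonly escape control characters as `#` plus the octal code, so embedded newlines can surface as `#012` in exported logs. A minimal sketch that restores the line breaks (the function name is mine; it assumes only the newline escape needs decoding):

```python
def expand_octal_newlines(line: str) -> str:
    """Replace the '#012' octal escape (rsyslog's rendering of '\\n')
    with a real newline so multi-line messages become readable."""
    return line.replace("#012", "\n")

sample = "** DB Stats **#012Uptime(secs): 1200.5 total, 600.0 interval"
print(expand_octal_newlines(sample))
```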
Nov 25 03:10:05 np0005534516 podman[254639]: 2025-11-25 08:10:05.807640271 +0000 UTC m=+0.058706748 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0)
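The `config_data` label in the multipathd health-check event above is a Python-literal dict (single quotes, bare `True`), not JSON, so `json.loads` rejects it; the stdlib `ast.literal_eval` parses it safely without executing code. A sketch on a trimmed copy of that label (the trimming is mine; the values are from the log):

```python
import ast

# Trimmed fragment of the config_data label; note the Python-style
# quoting and bare True that make this invalid JSON.
config_data = (
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
    "'test': '/openstack/healthcheck'}, "
    "'net': 'host', 'privileged': True, 'restart': 'always'}"
)

# literal_eval only accepts literals (dicts, lists, strings, numbers,
# booleans), so it is safe on untrusted label text.
cfg = ast.literal_eval(config_data)
print(cfg["healthcheck"]["test"])  # /openstack/healthcheck
print(cfg["privileged"])           # True
```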
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]: {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    "0": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "devices": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "/dev/loop3"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            ],
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_name": "ceph_lv0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_size": "21470642176",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "name": "ceph_lv0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "tags": {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_name": "ceph",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.crush_device_class": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.encrypted": "0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_id": "0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.vdo": "0"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            },
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "vg_name": "ceph_vg0"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        }
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    ],
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    "1": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "devices": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "/dev/loop4"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            ],
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_name": "ceph_lv1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_size": "21470642176",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "name": "ceph_lv1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "tags": {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_name": "ceph",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.crush_device_class": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.encrypted": "0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_id": "1",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.vdo": "0"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            },
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "vg_name": "ceph_vg1"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        }
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    ],
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    "2": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "devices": [
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "/dev/loop5"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            ],
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_name": "ceph_lv2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_size": "21470642176",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "name": "ceph_lv2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "tags": {
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.cluster_name": "ceph",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.crush_device_class": "",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.encrypted": "0",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osd_id": "2",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:                "ceph.vdo": "0"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            },
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "type": "block",
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:            "vg_name": "ceph_vg2"
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:        }
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]:    ]
Nov 25 03:10:05 np0005534516 laughing_cannon[254632]: }
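The JSON block above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of LV records. A sketch that reduces it to an OSD-id → backing-device map; the sample below is trimmed to only the fields the code touches, and the helper name is mine:

```python
import json

# Trimmed sample in the same shape as the report logged above.
raw = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db"}}]
}
"""

def osd_device_map(report: str) -> dict[str, list[str]]:
    """Map each OSD id to the physical devices backing its LVs."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        out[osd_id] = [dev for lv in lvs for dev in lv["devices"]]
    return out

print(osd_device_map(raw))  # {'0': ['/dev/loop3'], '1': ['/dev/loop4']}
```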
Nov 25 03:10:05 np0005534516 systemd[1]: libpod-ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81.scope: Deactivated successfully.
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:05.833089632 +0000 UTC m=+0.877357214 container died ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:10:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b21a6f70bb6c5022450b3b8e7f49fd342fe0fb534449cf227f6f98ae1511a621-merged.mount: Deactivated successfully.
Nov 25 03:10:05 np0005534516 podman[254615]: 2025-11-25 08:10:05.888349674 +0000 UTC m=+0.932617256 container remove ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:10:05 np0005534516 systemd[1]: libpod-conmon-ec2cf228b9e261923fb23e984e6e84e45fe124890ce2006a73dc6c89a7d44e81.scope: Deactivated successfully.
Nov 25 03:10:06 np0005534516 podman[254809]: 2025-11-25 08:10:06.486658942 +0000 UTC m=+0.075852120 container create f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:10:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:06 np0005534516 podman[254809]: 2025-11-25 08:10:06.436790899 +0000 UTC m=+0.025984097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:06 np0005534516 systemd[1]: Started libpod-conmon-f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48.scope.
Nov 25 03:10:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:06 np0005534516 podman[254809]: 2025-11-25 08:10:06.879253397 +0000 UTC m=+0.468446595 container init f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:10:06 np0005534516 podman[254809]: 2025-11-25 08:10:06.89210361 +0000 UTC m=+0.481296788 container start f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:10:06 np0005534516 sleepy_carson[254825]: 167 167
Nov 25 03:10:06 np0005534516 systemd[1]: libpod-f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48.scope: Deactivated successfully.
Nov 25 03:10:06 np0005534516 conmon[254825]: conmon f9b28b2dd2f5f0ee4727 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48.scope/container/memory.events
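The conmon warning above is benign here: the container exited before conmon could open its cgroup's `memory.events`, the cgroup v2 file it reads to report OOM events. The file format is one `key value` pair per line; a sketch of parsing it (the function name and sample content are mine):

```python
def parse_memory_events(text: str) -> dict[str, int]:
    """Parse a cgroup v2 memory.events file: one 'key value' pair per line."""
    return {k: int(v)
            for k, v in (line.split() for line in text.splitlines() if line)}

# Illustrative content in the kernel's documented format.
sample = "low 0\nhigh 0\nmax 0\noom 0\noom_kill 1\n"
events = parse_memory_events(sample)
print(events["oom_kill"])  # 1
```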
Nov 25 03:10:07 np0005534516 podman[254809]: 2025-11-25 08:10:07.013110082 +0000 UTC m=+0.602303260 container attach f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 03:10:07 np0005534516 podman[254809]: 2025-11-25 08:10:07.014744427 +0000 UTC m=+0.603937635 container died f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:10:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-17fb9511874a52ebe6073af558677ed0c812cacd572109dd0f930bae1d85d9b0-merged.mount: Deactivated successfully.
Nov 25 03:10:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:07 np0005534516 podman[254809]: 2025-11-25 08:10:07.500863085 +0000 UTC m=+1.090056273 container remove f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:10:07 np0005534516 systemd[1]: libpod-conmon-f9b28b2dd2f5f0ee472718e1afba5e86c98c5b8d620a675f80dc2005f3da1c48.scope: Deactivated successfully.
Nov 25 03:10:07 np0005534516 podman[254850]: 2025-11-25 08:10:07.701545171 +0000 UTC m=+0.084810616 container create 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:10:07 np0005534516 podman[254850]: 2025-11-25 08:10:07.644760697 +0000 UTC m=+0.028026162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:10:07 np0005534516 systemd[1]: Started libpod-conmon-74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a.scope.
Nov 25 03:10:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:10:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eed1bf082ee0237f46c7827b77fbf033c63c79cd8d091be6ba3372d94533a0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eed1bf082ee0237f46c7827b77fbf033c63c79cd8d091be6ba3372d94533a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eed1bf082ee0237f46c7827b77fbf033c63c79cd8d091be6ba3372d94533a0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eed1bf082ee0237f46c7827b77fbf033c63c79cd8d091be6ba3372d94533a0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:10:07 np0005534516 podman[254850]: 2025-11-25 08:10:07.957901852 +0000 UTC m=+0.341167327 container init 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:10:07 np0005534516 podman[254850]: 2025-11-25 08:10:07.965129531 +0000 UTC m=+0.348394976 container start 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:10:08 np0005534516 podman[254850]: 2025-11-25 08:10:08.040724042 +0000 UTC m=+0.423989487 container attach 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:10:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]: {
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_id": 1,
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "type": "bluestore"
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    },
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_id": 2,
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "type": "bluestore"
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    },
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_id": 0,
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:        "type": "bluestore"
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]:    }
Nov 25 03:10:08 np0005534516 hopeful_chandrasekhar[254867]: }
Nov 25 03:10:08 np0005534516 systemd[1]: libpod-74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a.scope: Deactivated successfully.
Nov 25 03:10:08 np0005534516 systemd[1]: libpod-74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a.scope: Consumed 1.026s CPU time.
Nov 25 03:10:08 np0005534516 podman[254850]: 2025-11-25 08:10:08.990810188 +0000 UTC m=+1.374075633 container died 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:10:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1eed1bf082ee0237f46c7827b77fbf033c63c79cd8d091be6ba3372d94533a0b-merged.mount: Deactivated successfully.
Nov 25 03:10:09 np0005534516 podman[254850]: 2025-11-25 08:10:09.313920658 +0000 UTC m=+1.697186143 container remove 74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:10:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:10:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:10:09 np0005534516 systemd[1]: libpod-conmon-74ffe5a5c5c798beeefca1abb1b9902c3c1be4e8e92e896488c32523ec30ba3a.scope: Deactivated successfully.
Nov 25 03:10:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c562342-5039-4763-b62c-c06d908b6876 does not exist
Nov 25 03:10:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 710f7a67-a814-4fe5-9807-fde0897df746 does not exist
Nov 25 03:10:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:10:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:10:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.4 total, 600.0 interval
Cumulative writes: 6163 writes, 25K keys, 6163 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6163 writes, 949 syncs, 6.49 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 25 03:10:11 np0005534516 podman[254963]: 2025-11-25 08:10:11.884719191 +0000 UTC m=+0.123835811 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:10:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3899162114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3899162114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/406744174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/406744174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948039731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:10:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948039731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:10:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:10:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1202.4 total, 600.0 interval
Cumulative writes: 5384 writes, 23K keys, 5384 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5384 writes, 730 syncs, 7.38 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1202.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1202.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1202.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 25 03:10:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:34 np0005534516 podman[254989]: 2025-11-25 08:10:34.799980353 +0000 UTC m=+0.050451831 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 03:10:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:36 np0005534516 podman[255008]: 2025-11-25 08:10:36.798125255 +0000 UTC m=+0.052982030 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:10:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:10:41.034 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:10:41.035 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:10:41.035 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:10:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 03:10:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:42 np0005534516 podman[255030]: 2025-11-25 08:10:42.820608351 +0000 UTC m=+0.076434317 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:10:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:10:53
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes']
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:10:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:10:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:56 np0005534516 nova_compute[253538]: 2025-11-25 08:10:56.840 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:56 np0005534516 nova_compute[253538]: 2025-11-25 08:10:56.859 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.576 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.600 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:10:57 np0005534516 nova_compute[253538]: 2025-11-25 08:10:57.601 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:10:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:10:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865195568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.043 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.223 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.224 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5207MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.449 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.449 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.469 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:10:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:10:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:10:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807100047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.952 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.961 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:10:58 np0005534516 nova_compute[253538]: 2025-11-25 08:10:58.975 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:10:59 np0005534516 nova_compute[253538]: 2025-11-25 08:10:59.002 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:10:59 np0005534516 nova_compute[253538]: 2025-11-25 08:10:59.003 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:11:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:11:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:11:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:05 np0005534516 podman[255101]: 2025-11-25 08:11:05.835164719 +0000 UTC m=+0.086435014 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 03:11:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:07 np0005534516 podman[255121]: 2025-11-25 08:11:07.817073708 +0000 UTC m=+0.066606494 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 03:11:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cded2dc9-3d60-4a63-b622-a1a7bcc492d1 does not exist
Nov 25 03:11:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9081b572-dbd8-4c91-a243-070b88cde72c does not exist
Nov 25 03:11:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c7839f13-94ba-43e5-a83f-b8a91e869e1a does not exist
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:11:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:11:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:11:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:11:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:11:11 np0005534516 podman[255409]: 2025-11-25 08:11:11.203433053 +0000 UTC m=+0.019635774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:11 np0005534516 podman[255409]: 2025-11-25 08:11:11.588805001 +0000 UTC m=+0.405007712 container create 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:11:11 np0005534516 systemd[1]: Started libpod-conmon-8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76.scope.
Nov 25 03:11:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:11 np0005534516 podman[255409]: 2025-11-25 08:11:11.873132521 +0000 UTC m=+0.689335312 container init 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:11:11 np0005534516 podman[255409]: 2025-11-25 08:11:11.880873095 +0000 UTC m=+0.697075836 container start 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:11:11 np0005534516 dreamy_davinci[255426]: 167 167
Nov 25 03:11:11 np0005534516 systemd[1]: libpod-8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76.scope: Deactivated successfully.
Nov 25 03:11:12 np0005534516 podman[255409]: 2025-11-25 08:11:12.1133396 +0000 UTC m=+0.929542341 container attach 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:11:12 np0005534516 podman[255409]: 2025-11-25 08:11:12.113869225 +0000 UTC m=+0.930071986 container died 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:11:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c6fb888ec2f2b9a406c96bc2049b04366106e4b072cf96c2a8f19378549cde5b-merged.mount: Deactivated successfully.
Nov 25 03:11:12 np0005534516 podman[255409]: 2025-11-25 08:11:12.475385042 +0000 UTC m=+1.291587763 container remove 8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:11:12 np0005534516 systemd[1]: libpod-conmon-8eef632902d735fef80ffa8d4e2feac5eb731ff0367f6000f6fa8deaa515bc76.scope: Deactivated successfully.
Nov 25 03:11:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:12 np0005534516 podman[255450]: 2025-11-25 08:11:12.649893552 +0000 UTC m=+0.027389479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:12 np0005534516 podman[255450]: 2025-11-25 08:11:12.79864539 +0000 UTC m=+0.176141237 container create 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:11:12 np0005534516 systemd[1]: Started libpod-conmon-476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f.scope.
Nov 25 03:11:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:12 np0005534516 podman[255450]: 2025-11-25 08:11:12.968624645 +0000 UTC m=+0.346120512 container init 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:11:12 np0005534516 podman[255450]: 2025-11-25 08:11:12.977518251 +0000 UTC m=+0.355014098 container start 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:11:13 np0005534516 podman[255450]: 2025-11-25 08:11:13.113949938 +0000 UTC m=+0.491445785 container attach 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:11:13 np0005534516 podman[255468]: 2025-11-25 08:11:13.223835399 +0000 UTC m=+0.349395571 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:11:13 np0005534516 unruffled_hodgkin[255466]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:11:13 np0005534516 unruffled_hodgkin[255466]: --> relative data size: 1.0
Nov 25 03:11:13 np0005534516 unruffled_hodgkin[255466]: --> All data devices are unavailable
Nov 25 03:11:13 np0005534516 systemd[1]: libpod-476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f.scope: Deactivated successfully.
Nov 25 03:11:13 np0005534516 podman[255450]: 2025-11-25 08:11:13.981370188 +0000 UTC m=+1.358866035 container died 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 03:11:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d679388d2dfab7d399e06457f6737864ed104142535495b0442e5b33a93baa63-merged.mount: Deactivated successfully.
Nov 25 03:11:14 np0005534516 podman[255450]: 2025-11-25 08:11:14.440467867 +0000 UTC m=+1.817963724 container remove 476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:11:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:14 np0005534516 systemd[1]: libpod-conmon-476ce6573287a20989c804e6fe3f49a56babedc634162eb88a2c0904da15cf9f.scope: Deactivated successfully.
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.011102271 +0000 UTC m=+0.052790691 container create 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:11:15 np0005534516 systemd[1]: Started libpod-conmon-16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741.scope.
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:14.980023841 +0000 UTC m=+0.021712281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.13820632 +0000 UTC m=+0.179894760 container init 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.145003098 +0000 UTC m=+0.186691518 container start 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 03:11:15 np0005534516 clever_hofstadter[255691]: 167 167
Nov 25 03:11:15 np0005534516 systemd[1]: libpod-16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741.scope: Deactivated successfully.
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.20360752 +0000 UTC m=+0.245295940 container attach 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.204734701 +0000 UTC m=+0.246423151 container died 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:11:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e41d9ab20af17c6e578a6d5eddbe4c42a00758186313f9a94b6266c95785b0e9-merged.mount: Deactivated successfully.
Nov 25 03:11:15 np0005534516 podman[255675]: 2025-11-25 08:11:15.404903272 +0000 UTC m=+0.446591692 container remove 16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:11:15 np0005534516 systemd[1]: libpod-conmon-16239c4d6a18407143fa57b7669705fc7b1a68dec9d8d29102391c9cbbba7741.scope: Deactivated successfully.
Nov 25 03:11:15 np0005534516 podman[255716]: 2025-11-25 08:11:15.580587715 +0000 UTC m=+0.043093844 container create 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 03:11:15 np0005534516 systemd[1]: Started libpod-conmon-8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3.scope.
Nov 25 03:11:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7cbaeb955bfd9fe0f115ae772692b68058d41198777e1856f8ee87835e124b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7cbaeb955bfd9fe0f115ae772692b68058d41198777e1856f8ee87835e124b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7cbaeb955bfd9fe0f115ae772692b68058d41198777e1856f8ee87835e124b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7cbaeb955bfd9fe0f115ae772692b68058d41198777e1856f8ee87835e124b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:15 np0005534516 podman[255716]: 2025-11-25 08:11:15.561732663 +0000 UTC m=+0.024238812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:15 np0005534516 podman[255716]: 2025-11-25 08:11:15.664533169 +0000 UTC m=+0.127039318 container init 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:11:15 np0005534516 podman[255716]: 2025-11-25 08:11:15.671671556 +0000 UTC m=+0.134177685 container start 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:11:15 np0005534516 podman[255716]: 2025-11-25 08:11:15.674940486 +0000 UTC m=+0.137446615 container attach 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]: {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    "0": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "devices": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "/dev/loop3"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            ],
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_name": "ceph_lv0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_size": "21470642176",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "name": "ceph_lv0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "tags": {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_name": "ceph",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.crush_device_class": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.encrypted": "0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_id": "0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.vdo": "0"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            },
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "vg_name": "ceph_vg0"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        }
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    ],
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    "1": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "devices": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "/dev/loop4"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            ],
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_name": "ceph_lv1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_size": "21470642176",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "name": "ceph_lv1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "tags": {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_name": "ceph",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.crush_device_class": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.encrypted": "0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_id": "1",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.vdo": "0"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            },
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "vg_name": "ceph_vg1"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        }
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    ],
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    "2": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "devices": [
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "/dev/loop5"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            ],
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_name": "ceph_lv2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_size": "21470642176",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "name": "ceph_lv2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "tags": {
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.cluster_name": "ceph",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.crush_device_class": "",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.encrypted": "0",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osd_id": "2",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:                "ceph.vdo": "0"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            },
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "type": "block",
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:            "vg_name": "ceph_vg2"
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:        }
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]:    ]
Nov 25 03:11:16 np0005534516 relaxed_chatterjee[255732]: }
Nov 25 03:11:16 np0005534516 systemd[1]: libpod-8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3.scope: Deactivated successfully.
Nov 25 03:11:16 np0005534516 podman[255716]: 2025-11-25 08:11:16.464261745 +0000 UTC m=+0.926767874 container died 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:11:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d7cbaeb955bfd9fe0f115ae772692b68058d41198777e1856f8ee87835e124b4-merged.mount: Deactivated successfully.
Nov 25 03:11:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:16 np0005534516 podman[255716]: 2025-11-25 08:11:16.544200509 +0000 UTC m=+1.006706638 container remove 8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:11:16 np0005534516 systemd[1]: libpod-conmon-8974e0ef126537b74cb5474c3865c473aff76e372600d2d7b836dd980e20d2d3.scope: Deactivated successfully.
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.099401437 +0000 UTC m=+0.040327398 container create a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:11:17 np0005534516 systemd[1]: Started libpod-conmon-a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944.scope.
Nov 25 03:11:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.171352499 +0000 UTC m=+0.112278460 container init a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.178202508 +0000 UTC m=+0.119128469 container start a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.083485736 +0000 UTC m=+0.024411717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:17 np0005534516 affectionate_buck[255908]: 167 167
Nov 25 03:11:17 np0005534516 systemd[1]: libpod-a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944.scope: Deactivated successfully.
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.183640808 +0000 UTC m=+0.124566789 container attach a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.185007476 +0000 UTC m=+0.125933437 container died a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:11:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-710232fe56f6455fcbc5e29d21dedfafa83ed11de5cb7f12bdd0450a4629c3e5-merged.mount: Deactivated successfully.
Nov 25 03:11:17 np0005534516 podman[255892]: 2025-11-25 08:11:17.227337397 +0000 UTC m=+0.168263358 container remove a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:11:17 np0005534516 systemd[1]: libpod-conmon-a539860e1dad9ecdac3b46e71c09f7bc235163d5e725670065353ff0ba429944.scope: Deactivated successfully.
Nov 25 03:11:17 np0005534516 podman[255935]: 2025-11-25 08:11:17.385170217 +0000 UTC m=+0.041033887 container create 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:11:17 np0005534516 systemd[1]: Started libpod-conmon-2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c.scope.
Nov 25 03:11:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:11:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed27999321b92604d6711e2b04eae7cb42d971e416bf621ed61673fde5dcedb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed27999321b92604d6711e2b04eae7cb42d971e416bf621ed61673fde5dcedb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed27999321b92604d6711e2b04eae7cb42d971e416bf621ed61673fde5dcedb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed27999321b92604d6711e2b04eae7cb42d971e416bf621ed61673fde5dcedb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:11:17 np0005534516 podman[255935]: 2025-11-25 08:11:17.445549798 +0000 UTC m=+0.101413498 container init 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:11:17 np0005534516 podman[255935]: 2025-11-25 08:11:17.454553618 +0000 UTC m=+0.110417288 container start 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:11:17 np0005534516 podman[255935]: 2025-11-25 08:11:17.458584579 +0000 UTC m=+0.114448249 container attach 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:11:17 np0005534516 podman[255935]: 2025-11-25 08:11:17.369126202 +0000 UTC m=+0.024989892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:11:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]: {
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_id": 1,
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "type": "bluestore"
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    },
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_id": 2,
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "type": "bluestore"
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    },
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_id": 0,
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:        "type": "bluestore"
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]:    }
Nov 25 03:11:18 np0005534516 romantic_driscoll[255951]: }
Nov 25 03:11:18 np0005534516 systemd[1]: libpod-2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c.scope: Deactivated successfully.
Nov 25 03:11:18 np0005534516 podman[255984]: 2025-11-25 08:11:18.486206594 +0000 UTC m=+0.024379066 container died 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 03:11:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6ed27999321b92604d6711e2b04eae7cb42d971e416bf621ed61673fde5dcedb-merged.mount: Deactivated successfully.
Nov 25 03:11:18 np0005534516 podman[255984]: 2025-11-25 08:11:18.53158546 +0000 UTC m=+0.069757932 container remove 2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:11:18 np0005534516 systemd[1]: libpod-conmon-2fd68f6a2ceeb57aec34f8a7b7971b38300b492c1e6a5fd125eb73ec0f03fe2c.scope: Deactivated successfully.
Nov 25 03:11:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:11:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:11:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b36efd1a-8a89-400e-a5d9-a75b8752e9cf does not exist
Nov 25 03:11:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5ef6659e-804d-4112-9d10-0e8745ed908d does not exist
Nov 25 03:11:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:11:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 03:11:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654547578' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 03:11:27 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 03:11:27 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 03:11:27 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 03:11:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:11:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810811532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:11:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:11:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810811532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:11:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:36 np0005534516 podman[256049]: 2025-11-25 08:11:36.874391038 +0000 UTC m=+0.119611402 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:11:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:38 np0005534516 podman[256070]: 2025-11-25 08:11:38.824295773 +0000 UTC m=+0.068273132 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:11:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:11:41.035 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:11:41.036 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:11:41.036 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:11:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:43 np0005534516 podman[256090]: 2025-11-25 08:11:43.920698532 +0000 UTC m=+0.176202229 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:11:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:51.761841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058311761898, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1480, "num_deletes": 251, "total_data_size": 2375838, "memory_usage": 2423968, "flush_reason": "Manual Compaction"}
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058311961632, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2332277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14863, "largest_seqno": 16342, "table_properties": {"data_size": 2325311, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14181, "raw_average_key_size": 19, "raw_value_size": 2311430, "raw_average_value_size": 3214, "num_data_blocks": 185, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058153, "oldest_key_time": 1764058153, "file_creation_time": 1764058311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 199860 microseconds, and 10498 cpu microseconds.
Nov 25 03:11:51 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:51.961699) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2332277 bytes OK
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:51.961730) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.017075) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.017155) EVENT_LOG_v1 {"time_micros": 1764058312017137, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.017199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2369331, prev total WAL file size 2369331, number of live WAL files 2.
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.019843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2277KB)], [35(7206KB)]
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058312020250, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9711268, "oldest_snapshot_seqno": -1}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4048 keys, 7931818 bytes, temperature: kUnknown
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058312367520, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7931818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7902580, "index_size": 17992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99251, "raw_average_key_size": 24, "raw_value_size": 7827125, "raw_average_value_size": 1933, "num_data_blocks": 765, "num_entries": 4048, "num_filter_entries": 4048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.368034) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7931818 bytes
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.386972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.9 rd, 22.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 4562, records dropped: 514 output_compression: NoCompression
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.387004) EVENT_LOG_v1 {"time_micros": 1764058312386991, "job": 16, "event": "compaction_finished", "compaction_time_micros": 347532, "compaction_time_cpu_micros": 33385, "output_level": 6, "num_output_files": 1, "total_output_size": 7931818, "num_input_records": 4562, "num_output_records": 4048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058312387632, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058312389061, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.019687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.389126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.389132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.389133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.389135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:11:52.389137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:11:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 03:11:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320665593' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:11:53
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:11:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:11:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:11:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:11:58 np0005534516 nova_compute[253538]: 2025-11-25 08:11:58.994 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:58 np0005534516 nova_compute[253538]: 2025-11-25 08:11:58.994 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.014 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.014 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.014 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.014 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.015 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.061 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.061 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.062 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.062 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.063 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:11:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:11:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115604863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.475 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.675 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.677 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5197MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.677 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.677 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.739 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.739 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:11:59 np0005534516 nova_compute[253538]: 2025-11-25 08:11:59.758 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:12:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:12:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782627290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.255 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.261 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.294 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.296 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.297 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:12:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.836 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.837 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.837 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.852 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.852 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.853 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:00 np0005534516 nova_compute[253538]: 2025-11-25 08:12:00.853 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:12:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:12:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:12:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 03:12:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Nov 25 03:12:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:07 np0005534516 podman[256161]: 2025-11-25 08:12:07.836441926 +0000 UTC m=+0.078626828 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 03:12:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Nov 25 03:12:09 np0005534516 podman[256180]: 2025-11-25 08:12:09.827256343 +0000 UTC m=+0.076708345 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:12:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 03:12:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 03:12:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 03:12:14 np0005534516 podman[256200]: 2025-11-25 08:12:14.855536878 +0000 UTC m=+0.107200928 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 03:12:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 25 03:12:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e48d1a91-e1d4-43d7-b7ac-7b012b96cd7e does not exist
Nov 25 03:12:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bac69b60-506a-4565-8f6d-ce4357886fac does not exist
Nov 25 03:12:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 629017ab-4907-4cf9-9e0f-0cbb2543418c does not exist
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:12:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.540422127 +0000 UTC m=+0.102844828 container create ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.462141391 +0000 UTC m=+0.024564112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:20 np0005534516 systemd[1]: Started libpod-conmon-ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881.scope.
Nov 25 03:12:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 25 03:12:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.62323199 +0000 UTC m=+0.185654731 container init ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.63049251 +0000 UTC m=+0.192915221 container start ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.634595815 +0000 UTC m=+0.197018566 container attach ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:20 np0005534516 eloquent_mccarthy[256516]: 167 167
Nov 25 03:12:20 np0005534516 systemd[1]: libpod-ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881.scope: Deactivated successfully.
Nov 25 03:12:20 np0005534516 conmon[256516]: conmon ef7f8602bf37f1cb89d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881.scope/container/memory.events
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.638916254 +0000 UTC m=+0.201338975 container died ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-77d14aea32c8d0c49b9e7ac680b066ddcb9e8eadc244301fbcc121329fd18f4f-merged.mount: Deactivated successfully.
Nov 25 03:12:20 np0005534516 podman[256499]: 2025-11-25 08:12:20.674742255 +0000 UTC m=+0.237164956 container remove ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:12:20 np0005534516 systemd[1]: libpod-conmon-ef7f8602bf37f1cb89d390d9747e326989cf62cdca2be2ae7eedd30fbe1dd881.scope: Deactivated successfully.
Nov 25 03:12:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:12:20 np0005534516 podman[256539]: 2025-11-25 08:12:20.857754801 +0000 UTC m=+0.048694299 container create 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:20 np0005534516 systemd[1]: Started libpod-conmon-369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64.scope.
Nov 25 03:12:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:20 np0005534516 podman[256539]: 2025-11-25 08:12:20.83637704 +0000 UTC m=+0.027316528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:20 np0005534516 podman[256539]: 2025-11-25 08:12:20.948265907 +0000 UTC m=+0.139205405 container init 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:20 np0005534516 podman[256539]: 2025-11-25 08:12:20.960097874 +0000 UTC m=+0.151037382 container start 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:12:20 np0005534516 podman[256539]: 2025-11-25 08:12:20.964580379 +0000 UTC m=+0.155519937 container attach 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:21 np0005534516 affectionate_panini[256556]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:12:21 np0005534516 affectionate_panini[256556]: --> relative data size: 1.0
Nov 25 03:12:21 np0005534516 affectionate_panini[256556]: --> All data devices are unavailable
Nov 25 03:12:22 np0005534516 systemd[1]: libpod-369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64.scope: Deactivated successfully.
Nov 25 03:12:22 np0005534516 systemd[1]: libpod-369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64.scope: Consumed 1.012s CPU time.
Nov 25 03:12:22 np0005534516 podman[256539]: 2025-11-25 08:12:22.01918497 +0000 UTC m=+1.210124478 container died 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:12:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4fc807790d25322a9c15020a31e77700f6d830389c8cce6ddb041d95428d883b-merged.mount: Deactivated successfully.
Nov 25 03:12:22 np0005534516 podman[256539]: 2025-11-25 08:12:22.071148789 +0000 UTC m=+1.262088257 container remove 369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 25 03:12:22 np0005534516 systemd[1]: libpod-conmon-369bd4f69406a9d1907aa231b0282e5f6f99c5d1aed65c668578bb8d16f4ef64.scope: Deactivated successfully.
Nov 25 03:12:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.710876316 +0000 UTC m=+0.042912348 container create 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:12:22 np0005534516 systemd[1]: Started libpod-conmon-45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca.scope.
Nov 25 03:12:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.692685202 +0000 UTC m=+0.024721274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.791745975 +0000 UTC m=+0.123782047 container init 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.799697295 +0000 UTC m=+0.131733327 container start 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.803182652 +0000 UTC m=+0.135218774 container attach 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:12:22 np0005534516 jolly_austin[256757]: 167 167
Nov 25 03:12:22 np0005534516 systemd[1]: libpod-45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca.scope: Deactivated successfully.
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.80712235 +0000 UTC m=+0.139158392 container died 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:12:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e6fc8ba4d6df7b3e81950331f939e4f9307b9bca310f6faed866cb146b3b6f7b-merged.mount: Deactivated successfully.
Nov 25 03:12:22 np0005534516 podman[256740]: 2025-11-25 08:12:22.852748034 +0000 UTC m=+0.184784066 container remove 45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:12:22 np0005534516 systemd[1]: libpod-conmon-45cfd9ed0587de9f626e962e2cf2a83f3a3ff15ea2aed71bb2bbc5a8b9eb70ca.scope: Deactivated successfully.
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.069180475 +0000 UTC m=+0.053148403 container create ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:12:23 np0005534516 systemd[1]: Started libpod-conmon-ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840.scope.
Nov 25 03:12:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.049603363 +0000 UTC m=+0.033571291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083b8e1222cc39eb989a3e71065bbcbd82e759dfcaa517b5e4ef3b270579f01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083b8e1222cc39eb989a3e71065bbcbd82e759dfcaa517b5e4ef3b270579f01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083b8e1222cc39eb989a3e71065bbcbd82e759dfcaa517b5e4ef3b270579f01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4083b8e1222cc39eb989a3e71065bbcbd82e759dfcaa517b5e4ef3b270579f01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.161014287 +0000 UTC m=+0.144982205 container init ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.172857544 +0000 UTC m=+0.156825482 container start ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.176890006 +0000 UTC m=+0.160857924 container attach ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:23 np0005534516 serene_davinci[256797]: {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    "0": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "devices": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "/dev/loop3"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            ],
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_name": "ceph_lv0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_size": "21470642176",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "name": "ceph_lv0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "tags": {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_name": "ceph",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.crush_device_class": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.encrypted": "0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_id": "0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.vdo": "0"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            },
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "vg_name": "ceph_vg0"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        }
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    ],
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    "1": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "devices": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "/dev/loop4"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            ],
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_name": "ceph_lv1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_size": "21470642176",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "name": "ceph_lv1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "tags": {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_name": "ceph",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.crush_device_class": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.encrypted": "0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_id": "1",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.vdo": "0"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            },
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "vg_name": "ceph_vg1"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        }
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    ],
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    "2": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "devices": [
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "/dev/loop5"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            ],
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_name": "ceph_lv2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_size": "21470642176",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "name": "ceph_lv2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "tags": {
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.cluster_name": "ceph",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.crush_device_class": "",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.encrypted": "0",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osd_id": "2",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:                "ceph.vdo": "0"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            },
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "type": "block",
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:            "vg_name": "ceph_vg2"
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:        }
Nov 25 03:12:23 np0005534516 serene_davinci[256797]:    ]
Nov 25 03:12:23 np0005534516 serene_davinci[256797]: }
Nov 25 03:12:23 np0005534516 systemd[1]: libpod-ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840.scope: Deactivated successfully.
Nov 25 03:12:23 np0005534516 podman[256781]: 2025-11-25 08:12:23.988537632 +0000 UTC m=+0.972505560 container died ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4083b8e1222cc39eb989a3e71065bbcbd82e759dfcaa517b5e4ef3b270579f01-merged.mount: Deactivated successfully.
Nov 25 03:12:24 np0005534516 podman[256781]: 2025-11-25 08:12:24.05059077 +0000 UTC m=+1.034558678 container remove ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:12:24 np0005534516 systemd[1]: libpod-conmon-ed48fc484c8b1cd9ff9415d202d8580a21be8d5caeb2e1971faa4fcf5e1d6840.scope: Deactivated successfully.
Nov 25 03:12:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.687881411 +0000 UTC m=+0.037386777 container create 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:12:24 np0005534516 systemd[1]: Started libpod-conmon-2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c.scope.
Nov 25 03:12:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.671656922 +0000 UTC m=+0.021162318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.77422952 +0000 UTC m=+0.123734906 container init 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.780934267 +0000 UTC m=+0.130439633 container start 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.784502155 +0000 UTC m=+0.134007551 container attach 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:12:24 np0005534516 fervent_cohen[256972]: 167 167
Nov 25 03:12:24 np0005534516 systemd[1]: libpod-2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c.scope: Deactivated successfully.
Nov 25 03:12:24 np0005534516 conmon[256972]: conmon 2589e7535dfc480646e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c.scope/container/memory.events
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.787739554 +0000 UTC m=+0.137244930 container died 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:12:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c8a7a4b8ad14e2d162b5e495716a761fc87cfad71ab7cc11aa86a1e6a61f1671-merged.mount: Deactivated successfully.
Nov 25 03:12:24 np0005534516 podman[256955]: 2025-11-25 08:12:24.823082603 +0000 UTC m=+0.172587979 container remove 2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:12:24 np0005534516 systemd[1]: libpod-conmon-2589e7535dfc480646e7ebaa73bc9b398895629a9cd4105b15ca4838fe4a639c.scope: Deactivated successfully.
Nov 25 03:12:25 np0005534516 podman[256994]: 2025-11-25 08:12:25.060634989 +0000 UTC m=+0.057460142 container create 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:12:25 np0005534516 systemd[1]: Started libpod-conmon-3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4.scope.
Nov 25 03:12:25 np0005534516 podman[256994]: 2025-11-25 08:12:25.033423305 +0000 UTC m=+0.030248478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:12:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:12:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdac997f1f9391a2ff56c7159a8a93d811d71344a87e2d75a4c9769c8ea317c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdac997f1f9391a2ff56c7159a8a93d811d71344a87e2d75a4c9769c8ea317c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdac997f1f9391a2ff56c7159a8a93d811d71344a87e2d75a4c9769c8ea317c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdac997f1f9391a2ff56c7159a8a93d811d71344a87e2d75a4c9769c8ea317c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:12:25 np0005534516 podman[256994]: 2025-11-25 08:12:25.159581748 +0000 UTC m=+0.156406951 container init 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 03:12:25 np0005534516 podman[256994]: 2025-11-25 08:12:25.175719584 +0000 UTC m=+0.172544747 container start 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:12:25 np0005534516 podman[256994]: 2025-11-25 08:12:25.179459148 +0000 UTC m=+0.176284341 container attach 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:12:26 np0005534516 funny_booth[257010]: {
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_id": 1,
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "type": "bluestore"
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    },
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_id": 2,
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "type": "bluestore"
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    },
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_id": 0,
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:12:26 np0005534516 funny_booth[257010]:        "type": "bluestore"
Nov 25 03:12:26 np0005534516 funny_booth[257010]:    }
Nov 25 03:12:26 np0005534516 funny_booth[257010]: }
Nov 25 03:12:26 np0005534516 systemd[1]: libpod-3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4.scope: Deactivated successfully.
Nov 25 03:12:26 np0005534516 podman[257043]: 2025-11-25 08:12:26.211747742 +0000 UTC m=+0.033868639 container died 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3fdac997f1f9391a2ff56c7159a8a93d811d71344a87e2d75a4c9769c8ea317c-merged.mount: Deactivated successfully.
Nov 25 03:12:26 np0005534516 podman[257043]: 2025-11-25 08:12:26.308604323 +0000 UTC m=+0.130725200 container remove 3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:12:26 np0005534516 systemd[1]: libpod-conmon-3aaa512dfbe4ab8341ddcf575323af69f9b1e8191e043cd39315b51ae3518ed4.scope: Deactivated successfully.
Nov 25 03:12:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:12:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:12:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8992a228-065b-4c76-8638-8b6192fe4643 does not exist
Nov 25 03:12:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c2734628-a458-4908-a6a7-1298605aef1e does not exist
Nov 25 03:12:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:12:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:12:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839763140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:12:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:12:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839763140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:12:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:37.121 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:12:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:37.123 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:12:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:37.123 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:12:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:38 np0005534516 podman[257108]: 2025-11-25 08:12:38.858032334 +0000 UTC m=+0.097918992 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:12:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:40 np0005534516 podman[257127]: 2025-11-25 08:12:40.818506971 +0000 UTC m=+0.069756262 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 03:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:41.036 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:41.036 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:12:41.037 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:12:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:45 np0005534516 podman[257148]: 2025-11-25 08:12:45.871257681 +0000 UTC m=+0.120305371 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:12:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:12:53
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'vms', 'volumes', 'default.rgw.meta']
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:12:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:12:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:12:57 np0005534516 nova_compute[253538]: 2025-11-25 08:12:57.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:12:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:12:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:12:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2488561780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.073 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.235 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.236 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.236 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.236 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.309 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.309 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.332 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:12:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:12:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:12:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788198711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.808 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.815 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.829 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.831 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:12:58 np0005534516 nova_compute[253538]: 2025-11-25 08:12:58.831 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:12:59 np0005534516 nova_compute[253538]: 2025-11-25 08:12:59.824 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:59 np0005534516 nova_compute[253538]: 2025-11-25 08:12:59.825 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:12:59 np0005534516 nova_compute[253538]: 2025-11-25 08:12:59.825 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:00 np0005534516 nova_compute[253538]: 2025-11-25 08:13:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:00 np0005534516 nova_compute[253538]: 2025-11-25 08:13:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:00 np0005534516 nova_compute[253538]: 2025-11-25 08:13:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:00 np0005534516 nova_compute[253538]: 2025-11-25 08:13:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:00 np0005534516 nova_compute[253538]: 2025-11-25 08:13:00.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:13:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:01 np0005534516 nova_compute[253538]: 2025-11-25 08:13:01.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:01 np0005534516 nova_compute[253538]: 2025-11-25 08:13:01.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:13:01 np0005534516 nova_compute[253538]: 2025-11-25 08:13:01.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:13:01 np0005534516 nova_compute[253538]: 2025-11-25 08:13:01.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:13:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:13:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:13:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:09 np0005534516 podman[257219]: 2025-11-25 08:13:09.818600228 +0000 UTC m=+0.063047583 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 03:13:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:11 np0005534516 podman[257238]: 2025-11-25 08:13:11.827285593 +0000 UTC m=+0.075199728 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:13:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:16 np0005534516 podman[257258]: 2025-11-25 08:13:16.884260317 +0000 UTC m=+0.125846307 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 03:13:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:27 np0005534516 podman[257458]: 2025-11-25 08:13:27.45753095 +0000 UTC m=+0.089026131 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:13:27 np0005534516 podman[257458]: 2025-11-25 08:13:27.603469501 +0000 UTC m=+0.234964672 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:13:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215565804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:13:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215565804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7006680c-2cb4-43ec-b1ab-90cf746b3282 does not exist
Nov 25 03:13:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8f6c3411-4acd-4fd9-b36a-530a6c3b43ce does not exist
Nov 25 03:13:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 77160857-7832-4001-a4cf-f56757f403df does not exist
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.71226384 +0000 UTC m=+0.041465726 container create 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:13:29 np0005534516 systemd[1]: Started libpod-conmon-5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982.scope.
Nov 25 03:13:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.69597802 +0000 UTC m=+0.025179906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.79551812 +0000 UTC m=+0.124720016 container init 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.806662308 +0000 UTC m=+0.135864184 container start 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.810143964 +0000 UTC m=+0.139345850 container attach 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:13:29 np0005534516 vigorous_benz[257905]: 167 167
Nov 25 03:13:29 np0005534516 systemd[1]: libpod-5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982.scope: Deactivated successfully.
Nov 25 03:13:29 np0005534516 conmon[257905]: conmon 5e2a4a219d22a398d110 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982.scope/container/memory.events
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.813083565 +0000 UTC m=+0.142285451 container died 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:13:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-066ad1beaa9a12d11058eb4684fe4cc528e5409c524136d49d2124fe1579796d-merged.mount: Deactivated successfully.
Nov 25 03:13:29 np0005534516 podman[257889]: 2025-11-25 08:13:29.853644686 +0000 UTC m=+0.182846552 container remove 5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:13:29 np0005534516 systemd[1]: libpod-conmon-5e2a4a219d22a398d1102f34f4149be1416333177dd073667402d66b3d3f3982.scope: Deactivated successfully.
Nov 25 03:13:30 np0005534516 podman[257929]: 2025-11-25 08:13:30.083271399 +0000 UTC m=+0.048352358 container create ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 03:13:30 np0005534516 systemd[1]: Started libpod-conmon-ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055.scope.
Nov 25 03:13:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:30 np0005534516 podman[257929]: 2025-11-25 08:13:30.063208314 +0000 UTC m=+0.028289313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:30 np0005534516 podman[257929]: 2025-11-25 08:13:30.171767473 +0000 UTC m=+0.136848472 container init ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:13:30 np0005534516 podman[257929]: 2025-11-25 08:13:30.185004599 +0000 UTC m=+0.150085608 container start ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:13:30 np0005534516 podman[257929]: 2025-11-25 08:13:30.188958477 +0000 UTC m=+0.154039456 container attach ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:13:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:31 np0005534516 laughing_lamarr[257946]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:13:31 np0005534516 laughing_lamarr[257946]: --> relative data size: 1.0
Nov 25 03:13:31 np0005534516 laughing_lamarr[257946]: --> All data devices are unavailable
Nov 25 03:13:31 np0005534516 systemd[1]: libpod-ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055.scope: Deactivated successfully.
Nov 25 03:13:31 np0005534516 systemd[1]: libpod-ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055.scope: Consumed 1.071s CPU time.
Nov 25 03:13:31 np0005534516 podman[257929]: 2025-11-25 08:13:31.303292938 +0000 UTC m=+1.268373937 container died ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d900833919f33f7628abba7af7d9ddd5c30ef38fc0f86f322e08377a96063c4a-merged.mount: Deactivated successfully.
Nov 25 03:13:31 np0005534516 podman[257929]: 2025-11-25 08:13:31.367917553 +0000 UTC m=+1.332998512 container remove ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:31 np0005534516 systemd[1]: libpod-conmon-ef1289b4581bf10c0f9a74eeedbccbac300e5d0c616c14cd0dc30ee39ae12055.scope: Deactivated successfully.
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.035527243 +0000 UTC m=+0.038048481 container create f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:13:32 np0005534516 systemd[1]: Started libpod-conmon-f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a.scope.
Nov 25 03:13:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.101993829 +0000 UTC m=+0.104515137 container init f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.111673387 +0000 UTC m=+0.114194645 container start f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.115358889 +0000 UTC m=+0.117880147 container attach f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:13:32 np0005534516 boring_kare[258143]: 167 167
Nov 25 03:13:32 np0005534516 systemd[1]: libpod-f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a.scope: Deactivated successfully.
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.02127563 +0000 UTC m=+0.023796868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.11904055 +0000 UTC m=+0.121561838 container died f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:13:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-68e62e41f6c9259c9d7da9d44873947a4c07afb1cff1e202432a70cf333de81b-merged.mount: Deactivated successfully.
Nov 25 03:13:32 np0005534516 podman[258127]: 2025-11-25 08:13:32.167164089 +0000 UTC m=+0.169685347 container remove f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:13:32 np0005534516 systemd[1]: libpod-conmon-f9487ccb8c2fd4b87a4a3190b7570e34994c811bfa453e50a952b13e8f3cd21a.scope: Deactivated successfully.
Nov 25 03:13:32 np0005534516 podman[258168]: 2025-11-25 08:13:32.349058364 +0000 UTC m=+0.039567414 container create c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:13:32 np0005534516 systemd[1]: Started libpod-conmon-c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2.scope.
Nov 25 03:13:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758cca7314dc12419c03465d4733fa2dd6c4514b77f7723a7c6fb99b70d8f795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758cca7314dc12419c03465d4733fa2dd6c4514b77f7723a7c6fb99b70d8f795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758cca7314dc12419c03465d4733fa2dd6c4514b77f7723a7c6fb99b70d8f795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758cca7314dc12419c03465d4733fa2dd6c4514b77f7723a7c6fb99b70d8f795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:32 np0005534516 podman[258168]: 2025-11-25 08:13:32.332517977 +0000 UTC m=+0.023027117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:32 np0005534516 podman[258168]: 2025-11-25 08:13:32.488863595 +0000 UTC m=+0.179372665 container init c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:13:32 np0005534516 podman[258168]: 2025-11-25 08:13:32.504467627 +0000 UTC m=+0.194976687 container start c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:13:32 np0005534516 podman[258168]: 2025-11-25 08:13:32.540694038 +0000 UTC m=+0.231203098 container attach c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:13:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]: {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    "0": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "devices": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "/dev/loop3"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            ],
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_name": "ceph_lv0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_size": "21470642176",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "name": "ceph_lv0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "tags": {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_name": "ceph",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.crush_device_class": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.encrypted": "0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_id": "0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.vdo": "0"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            },
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "vg_name": "ceph_vg0"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        }
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    ],
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    "1": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "devices": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "/dev/loop4"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            ],
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_name": "ceph_lv1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_size": "21470642176",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "name": "ceph_lv1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "tags": {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_name": "ceph",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.crush_device_class": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.encrypted": "0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_id": "1",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.vdo": "0"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            },
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "vg_name": "ceph_vg1"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        }
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    ],
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    "2": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "devices": [
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "/dev/loop5"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            ],
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_name": "ceph_lv2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_size": "21470642176",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "name": "ceph_lv2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "tags": {
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.cluster_name": "ceph",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.crush_device_class": "",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.encrypted": "0",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osd_id": "2",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:                "ceph.vdo": "0"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            },
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "type": "block",
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:            "vg_name": "ceph_vg2"
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:        }
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]:    ]
Nov 25 03:13:33 np0005534516 sharp_noyce[258185]: }
Nov 25 03:13:33 np0005534516 systemd[1]: libpod-c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2.scope: Deactivated successfully.
Nov 25 03:13:33 np0005534516 podman[258168]: 2025-11-25 08:13:33.298854579 +0000 UTC m=+0.989363639 container died c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:13:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-758cca7314dc12419c03465d4733fa2dd6c4514b77f7723a7c6fb99b70d8f795-merged.mount: Deactivated successfully.
Nov 25 03:13:33 np0005534516 podman[258168]: 2025-11-25 08:13:33.530845627 +0000 UTC m=+1.221354717 container remove c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:33 np0005534516 systemd[1]: libpod-conmon-c358588bccdc7df308950fb42e995fc17bb9c80691348eeace6aa8939127aec2.scope: Deactivated successfully.
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.342949759 +0000 UTC m=+0.091843698 container create d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.281696807 +0000 UTC m=+0.030590746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:34 np0005534516 systemd[1]: Started libpod-conmon-d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc.scope.
Nov 25 03:13:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.657158038 +0000 UTC m=+0.406052017 container init d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.667526245 +0000 UTC m=+0.416420174 container start d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:13:34 np0005534516 adoring_sammet[258365]: 167 167
Nov 25 03:13:34 np0005534516 systemd[1]: libpod-d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc.scope: Deactivated successfully.
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.848191544 +0000 UTC m=+0.597085553 container attach d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:34 np0005534516 podman[258349]: 2025-11-25 08:13:34.848783002 +0000 UTC m=+0.597676961 container died d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:13:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c30e94700b76f9cf1f4bd6fdc2de500400d4d59d4d171721d6bc4578124a2555-merged.mount: Deactivated successfully.
Nov 25 03:13:35 np0005534516 podman[258349]: 2025-11-25 08:13:35.445599277 +0000 UTC m=+1.194493226 container remove d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 03:13:35 np0005534516 systemd[1]: libpod-conmon-d1b93b0645e1d29890bd5cc3c24767c4e983cde5ca49eb92ea176b60406ebacc.scope: Deactivated successfully.
Nov 25 03:13:35 np0005534516 podman[258391]: 2025-11-25 08:13:35.637747654 +0000 UTC m=+0.025348731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:13:35 np0005534516 podman[258391]: 2025-11-25 08:13:35.881806195 +0000 UTC m=+0.269407272 container create dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:13:36 np0005534516 systemd[1]: Started libpod-conmon-dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f.scope.
Nov 25 03:13:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:13:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71379eab033f7e46e5ce3e4c6811c7503440faee4e3bd33703c714826c8c46e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71379eab033f7e46e5ce3e4c6811c7503440faee4e3bd33703c714826c8c46e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71379eab033f7e46e5ce3e4c6811c7503440faee4e3bd33703c714826c8c46e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71379eab033f7e46e5ce3e4c6811c7503440faee4e3bd33703c714826c8c46e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:13:36 np0005534516 podman[258391]: 2025-11-25 08:13:36.124764186 +0000 UTC m=+0.512365253 container init dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:13:36 np0005534516 podman[258391]: 2025-11-25 08:13:36.132162341 +0000 UTC m=+0.519763388 container start dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:13:36 np0005534516 podman[258391]: 2025-11-25 08:13:36.251626931 +0000 UTC m=+0.639228028 container attach dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:13:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:37 np0005534516 boring_kepler[258407]: {
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_id": 1,
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "type": "bluestore"
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    },
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_id": 2,
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "type": "bluestore"
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    },
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_id": 0,
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:        "type": "bluestore"
Nov 25 03:13:37 np0005534516 boring_kepler[258407]:    }
Nov 25 03:13:37 np0005534516 boring_kepler[258407]: }
Nov 25 03:13:37 np0005534516 systemd[1]: libpod-dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f.scope: Deactivated successfully.
Nov 25 03:13:37 np0005534516 systemd[1]: libpod-dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f.scope: Consumed 1.171s CPU time.
Nov 25 03:13:37 np0005534516 podman[258391]: 2025-11-25 08:13:37.298879208 +0000 UTC m=+1.686480255 container died dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c71379eab033f7e46e5ce3e4c6811c7503440faee4e3bd33703c714826c8c46e-merged.mount: Deactivated successfully.
Nov 25 03:13:37 np0005534516 podman[258391]: 2025-11-25 08:13:37.480618138 +0000 UTC m=+1.868219185 container remove dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:13:37 np0005534516 systemd[1]: libpod-conmon-dc603e3bb353fa0be05a63bb9728d7ea06f88c3691fe136882be818dd5e1f90f.scope: Deactivated successfully.
Nov 25 03:13:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:13:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:13:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ca3c3aba-c000-441b-9cb7-84395828d72d does not exist
Nov 25 03:13:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f92d0615-3192-49d0-b1eb-8e75e1dd7da7 does not exist
Nov 25 03:13:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:13:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:40 np0005534516 podman[258505]: 2025-11-25 08:13:40.829550233 +0000 UTC m=+0.076980948 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 03:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:13:41.036 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:13:41.037 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:13:41.037 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:13:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:42 np0005534516 podman[258526]: 2025-11-25 08:13:42.81120463 +0000 UTC m=+0.060059539 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:13:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:47 np0005534516 podman[258546]: 2025-11-25 08:13:47.822843525 +0000 UTC m=+0.070606902 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 03:13:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:13:53
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'vms']
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:13:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:13:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:13:58 np0005534516 nova_compute[253538]: 2025-11-25 08:13:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:58 np0005534516 nova_compute[253538]: 2025-11-25 08:13:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.586 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:13:59 np0005534516 nova_compute[253538]: 2025-11-25 08:13:59.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:14:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:14:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263636577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.055 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.253 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.255 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.256 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.256 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.311 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.312 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.333 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:14:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:14:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643593700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.785 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.791 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.810 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.811 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:14:00 np0005534516 nova_compute[253538]: 2025-11-25 08:14:00.811 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:14:01 np0005534516 nova_compute[253538]: 2025-11-25 08:14:01.806 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:01 np0005534516 nova_compute[253538]: 2025-11-25 08:14:01.820 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:02 np0005534516 nova_compute[253538]: 2025-11-25 08:14:02.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:14:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:14:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:14:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:11 np0005534516 podman[258617]: 2025-11-25 08:14:11.818244767 +0000 UTC m=+0.070376736 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:14:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:13 np0005534516 podman[258636]: 2025-11-25 08:14:13.80812138 +0000 UTC m=+0.059449603 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 03:14:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:18 np0005534516 podman[258657]: 2025-11-25 08:14:18.854822308 +0000 UTC m=+0.098582934 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:14:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:14:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/534291760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:14:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:14:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/534291760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:14:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:14:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0ed4eb3f-e3d2-4e93-bf35-09fc9916a529 does not exist
Nov 25 03:14:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 69a4d50e-d6ec-4a5b-b66e-abf291f21a3a does not exist
Nov 25 03:14:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev baa19e7e-222a-493b-a99c-97f09773109e does not exist
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:14:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:14:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:14:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.360126912 +0000 UTC m=+0.049513919 container create b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:14:39 np0005534516 systemd[1]: Started libpod-conmon-b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492.scope.
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.339021559 +0000 UTC m=+0.028408616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.473975407 +0000 UTC m=+0.163362464 container init b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.48639735 +0000 UTC m=+0.175784357 container start b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.490411621 +0000 UTC m=+0.179798648 container attach b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:14:39 np0005534516 modest_golick[258971]: 167 167
Nov 25 03:14:39 np0005534516 systemd[1]: libpod-b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492.scope: Deactivated successfully.
Nov 25 03:14:39 np0005534516 conmon[258971]: conmon b2507706c95280f87948 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492.scope/container/memory.events
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.497915368 +0000 UTC m=+0.187302365 container died b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:14:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-74bb1f46aa19d35aa675dcd42f2ac3048089ef0623c899ec910ca7dd133019a2-merged.mount: Deactivated successfully.
Nov 25 03:14:39 np0005534516 podman[258954]: 2025-11-25 08:14:39.538829598 +0000 UTC m=+0.228216625 container remove b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:14:39 np0005534516 systemd[1]: libpod-conmon-b2507706c95280f87948778e7b279674626c6026602946dc9a158ccaceb8c492.scope: Deactivated successfully.
Nov 25 03:14:39 np0005534516 podman[258994]: 2025-11-25 08:14:39.712332351 +0000 UTC m=+0.044775268 container create d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:14:39 np0005534516 systemd[1]: Started libpod-conmon-d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17.scope.
Nov 25 03:14:39 np0005534516 podman[258994]: 2025-11-25 08:14:39.691725701 +0000 UTC m=+0.024168668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:39 np0005534516 podman[258994]: 2025-11-25 08:14:39.814168433 +0000 UTC m=+0.146611450 container init d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:14:39 np0005534516 podman[258994]: 2025-11-25 08:14:39.824419197 +0000 UTC m=+0.156862154 container start d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:14:39 np0005534516 podman[258994]: 2025-11-25 08:14:39.829684942 +0000 UTC m=+0.162127899 container attach d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:14:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:40 np0005534516 heuristic_keldysh[259010]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:14:40 np0005534516 heuristic_keldysh[259010]: --> relative data size: 1.0
Nov 25 03:14:40 np0005534516 heuristic_keldysh[259010]: --> All data devices are unavailable
Nov 25 03:14:40 np0005534516 systemd[1]: libpod-d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17.scope: Deactivated successfully.
Nov 25 03:14:40 np0005534516 systemd[1]: libpod-d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17.scope: Consumed 1.053s CPU time.
Nov 25 03:14:40 np0005534516 conmon[259010]: conmon d674d84a72abb7d55024 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17.scope/container/memory.events
Nov 25 03:14:40 np0005534516 podman[258994]: 2025-11-25 08:14:40.931132106 +0000 UTC m=+1.263575023 container died d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:14:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-21ee42b81036c73e24fc7307ae537d10dc07ca196672a8dbfcc80bacdb22f221-merged.mount: Deactivated successfully.
Nov 25 03:14:40 np0005534516 podman[258994]: 2025-11-25 08:14:40.999132274 +0000 UTC m=+1.331575191 container remove d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:14:41 np0005534516 systemd[1]: libpod-conmon-d674d84a72abb7d5502433dbc39f833ebdb1eb15ae69882504ba353c3e3bfd17.scope: Deactivated successfully.
Nov 25 03:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:14:41.038 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:14:41.041 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:14:41.041 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.847052156 +0000 UTC m=+0.069654275 container create 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:14:41 np0005534516 systemd[1]: Started libpod-conmon-4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d.scope.
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.817815678 +0000 UTC m=+0.040417857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.937656358 +0000 UTC m=+0.160258517 container init 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.946430131 +0000 UTC m=+0.169032250 container start 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.950646428 +0000 UTC m=+0.173248557 container attach 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:14:41 np0005534516 zealous_hawking[259213]: 167 167
Nov 25 03:14:41 np0005534516 systemd[1]: libpod-4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d.scope: Deactivated successfully.
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.95290302 +0000 UTC m=+0.175505169 container died 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:14:41 np0005534516 podman[259210]: 2025-11-25 08:14:41.953593099 +0000 UTC m=+0.058673722 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:14:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-000c889760471385fb9599cdcc330ffa463d34bcfce0076ad8d7589f147055fc-merged.mount: Deactivated successfully.
Nov 25 03:14:41 np0005534516 podman[259196]: 2025-11-25 08:14:41.995870817 +0000 UTC m=+0.218472936 container remove 4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:14:42 np0005534516 systemd[1]: libpod-conmon-4fd5f959e6fd0a8a584cf7997e441e50fe6e890641b2f2fe9c98cd4de0cab11d.scope: Deactivated successfully.
Nov 25 03:14:42 np0005534516 podman[259255]: 2025-11-25 08:14:42.17592216 +0000 UTC m=+0.058512358 container create 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:14:42 np0005534516 systemd[1]: Started libpod-conmon-58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9.scope.
Nov 25 03:14:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c36292127e0b9d42d53ba4be3726c5278e6c7655235e620536630b8c790cf80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c36292127e0b9d42d53ba4be3726c5278e6c7655235e620536630b8c790cf80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c36292127e0b9d42d53ba4be3726c5278e6c7655235e620536630b8c790cf80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c36292127e0b9d42d53ba4be3726c5278e6c7655235e620536630b8c790cf80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:42 np0005534516 podman[259255]: 2025-11-25 08:14:42.156632347 +0000 UTC m=+0.039222535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:42 np0005534516 podman[259255]: 2025-11-25 08:14:42.258588374 +0000 UTC m=+0.141178612 container init 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:14:42 np0005534516 podman[259255]: 2025-11-25 08:14:42.272494898 +0000 UTC m=+0.155085066 container start 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:14:42 np0005534516 podman[259255]: 2025-11-25 08:14:42.27582883 +0000 UTC m=+0.158419138 container attach 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:14:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:43 np0005534516 objective_diffie[259271]: {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    "0": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "devices": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "/dev/loop3"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            ],
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_name": "ceph_lv0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_size": "21470642176",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "name": "ceph_lv0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "tags": {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_name": "ceph",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.crush_device_class": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.encrypted": "0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_id": "0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.vdo": "0"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            },
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "vg_name": "ceph_vg0"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        }
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    ],
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    "1": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "devices": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "/dev/loop4"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            ],
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_name": "ceph_lv1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_size": "21470642176",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "name": "ceph_lv1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "tags": {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_name": "ceph",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.crush_device_class": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.encrypted": "0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_id": "1",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.vdo": "0"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            },
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "vg_name": "ceph_vg1"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        }
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    ],
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    "2": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "devices": [
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "/dev/loop5"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            ],
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_name": "ceph_lv2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_size": "21470642176",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "name": "ceph_lv2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "tags": {
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.cluster_name": "ceph",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.crush_device_class": "",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.encrypted": "0",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osd_id": "2",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:                "ceph.vdo": "0"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            },
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "type": "block",
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:            "vg_name": "ceph_vg2"
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:        }
Nov 25 03:14:43 np0005534516 objective_diffie[259271]:    ]
Nov 25 03:14:43 np0005534516 objective_diffie[259271]: }
Nov 25 03:14:43 np0005534516 systemd[1]: libpod-58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9.scope: Deactivated successfully.
Nov 25 03:14:43 np0005534516 podman[259255]: 2025-11-25 08:14:43.051731672 +0000 UTC m=+0.934321860 container died 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:14:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1c36292127e0b9d42d53ba4be3726c5278e6c7655235e620536630b8c790cf80-merged.mount: Deactivated successfully.
Nov 25 03:14:43 np0005534516 podman[259255]: 2025-11-25 08:14:43.099405109 +0000 UTC m=+0.981995267 container remove 58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:14:43 np0005534516 systemd[1]: libpod-conmon-58c26109b03a181acc429858c2529278ce3006e527e67c23740c4c1e52f21ea9.scope: Deactivated successfully.
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.801564243 +0000 UTC m=+0.044360065 container create ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:14:43 np0005534516 systemd[1]: Started libpod-conmon-ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012.scope.
Nov 25 03:14:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.778242309 +0000 UTC m=+0.021038121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.88976936 +0000 UTC m=+0.132565192 container init ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.896477305 +0000 UTC m=+0.139273097 container start ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:14:43 np0005534516 xenodochial_solomon[259451]: 167 167
Nov 25 03:14:43 np0005534516 systemd[1]: libpod-ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012.scope: Deactivated successfully.
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.901051662 +0000 UTC m=+0.143847474 container attach ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.901845953 +0000 UTC m=+0.144641765 container died ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:14:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-62ad18c0682d8294748b8ba34179258115beaa81db1c948849c593183bb1eeb5-merged.mount: Deactivated successfully.
Nov 25 03:14:43 np0005534516 podman[259435]: 2025-11-25 08:14:43.984583739 +0000 UTC m=+0.227379521 container remove ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:14:43 np0005534516 podman[259452]: 2025-11-25 08:14:43.989718901 +0000 UTC m=+0.122880865 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:14:43 np0005534516 systemd[1]: libpod-conmon-ae35a65029cff3dbb00274a1b6203c383d52249b1106bde90848b6302feec012.scope: Deactivated successfully.
Nov 25 03:14:44 np0005534516 podman[259498]: 2025-11-25 08:14:44.202040965 +0000 UTC m=+0.072100242 container create ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:14:44 np0005534516 systemd[1]: Started libpod-conmon-ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4.scope.
Nov 25 03:14:44 np0005534516 podman[259498]: 2025-11-25 08:14:44.175506993 +0000 UTC m=+0.045566350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:14:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:14:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb627aab95a5674c3488e2ec9d09ad7781945c5f34d0da1c27ac71497c8fd70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb627aab95a5674c3488e2ec9d09ad7781945c5f34d0da1c27ac71497c8fd70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb627aab95a5674c3488e2ec9d09ad7781945c5f34d0da1c27ac71497c8fd70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb627aab95a5674c3488e2ec9d09ad7781945c5f34d0da1c27ac71497c8fd70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:14:44 np0005534516 podman[259498]: 2025-11-25 08:14:44.290939811 +0000 UTC m=+0.160999138 container init ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:14:44 np0005534516 podman[259498]: 2025-11-25 08:14:44.298064458 +0000 UTC m=+0.168123735 container start ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:14:44 np0005534516 podman[259498]: 2025-11-25 08:14:44.301524054 +0000 UTC m=+0.171583421 container attach ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:14:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]: {
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_id": 1,
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "type": "bluestore"
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    },
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_id": 2,
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "type": "bluestore"
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    },
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_id": 0,
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:        "type": "bluestore"
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]:    }
Nov 25 03:14:45 np0005534516 vibrant_euclid[259513]: }
Nov 25 03:14:45 np0005534516 systemd[1]: libpod-ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4.scope: Deactivated successfully.
Nov 25 03:14:45 np0005534516 systemd[1]: libpod-ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4.scope: Consumed 1.048s CPU time.
Nov 25 03:14:45 np0005534516 podman[259498]: 2025-11-25 08:14:45.340487522 +0000 UTC m=+1.210546799 container died ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:14:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-adb627aab95a5674c3488e2ec9d09ad7781945c5f34d0da1c27ac71497c8fd70-merged.mount: Deactivated successfully.
Nov 25 03:14:45 np0005534516 podman[259498]: 2025-11-25 08:14:45.398657649 +0000 UTC m=+1.268716926 container remove ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:14:45 np0005534516 systemd[1]: libpod-conmon-ef29a9fabde1b97d1d4c1dedd278dec1d7bc5d71f33001e30f515ef48e5a1cb4.scope: Deactivated successfully.
Nov 25 03:14:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:14:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:14:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c5c4a3f8-005b-4074-a43a-4b353a491370 does not exist
Nov 25 03:14:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5bee2447-f736-41bc-abb5-c3febd6bd30b does not exist
Nov 25 03:14:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:14:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:49 np0005534516 podman[259610]: 2025-11-25 08:14:49.881453173 +0000 UTC m=+0.113868007 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 03:14:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.795115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058492795147, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1679, "num_deletes": 251, "total_data_size": 2733219, "memory_usage": 2776808, "flush_reason": "Manual Compaction"}
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058492893111, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1572152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16343, "largest_seqno": 18021, "table_properties": {"data_size": 1566595, "index_size": 2697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14296, "raw_average_key_size": 20, "raw_value_size": 1554229, "raw_average_value_size": 2195, "num_data_blocks": 125, "num_entries": 708, "num_filter_entries": 708, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058312, "oldest_key_time": 1764058312, "file_creation_time": 1764058492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 98058 microseconds, and 8173 cpu microseconds.
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.893166) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1572152 bytes OK
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.893193) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.940921) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.940975) EVENT_LOG_v1 {"time_micros": 1764058492940962, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.941006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2725991, prev total WAL file size 2752477, number of live WAL files 2.
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.942715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1535KB)], [38(7745KB)]
Nov 25 03:14:52 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058492942772, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9503970, "oldest_snapshot_seqno": -1}
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:14:53
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta']
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4326 keys, 7295616 bytes, temperature: kUnknown
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058493228622, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7295616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7266857, "index_size": 16873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 105212, "raw_average_key_size": 24, "raw_value_size": 7188700, "raw_average_value_size": 1661, "num_data_blocks": 721, "num_entries": 4326, "num_filter_entries": 4326, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.228902) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7295616 bytes
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.315442) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.2 rd, 25.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.6 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.7) write-amplify(4.6) OK, records in: 4756, records dropped: 430 output_compression: NoCompression
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.315492) EVENT_LOG_v1 {"time_micros": 1764058493315472, "job": 18, "event": "compaction_finished", "compaction_time_micros": 285928, "compaction_time_cpu_micros": 33677, "output_level": 6, "num_output_files": 1, "total_output_size": 7295616, "num_input_records": 4756, "num_output_records": 4326, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058493316410, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058493318993, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:52.942594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.319114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.319122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.319125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.319128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:14:53.319131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:14:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:14:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:14:57 np0005534516 nova_compute[253538]: 2025-11-25 08:14:57.579 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:14:58 np0005534516 nova_compute[253538]: 2025-11-25 08:14:58.581 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:14:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:14:59 np0005534516 nova_compute[253538]: 2025-11-25 08:14:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.591 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.592 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:15:00 np0005534516 nova_compute[253538]: 2025-11-25 08:15:00.593 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:15:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:15:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824731598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.057 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.224 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.226 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.226 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.226 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.508 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.509 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:15:01 np0005534516 nova_compute[253538]: 2025-11-25 08:15:01.613 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:15:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:15:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000365572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:15:02 np0005534516 nova_compute[253538]: 2025-11-25 08:15:02.209 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:15:02 np0005534516 nova_compute[253538]: 2025-11-25 08:15:02.214 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:15:02 np0005534516 nova_compute[253538]: 2025-11-25 08:15:02.226 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:15:02 np0005534516 nova_compute[253538]: 2025-11-25 08:15:02.227 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:15:02 np0005534516 nova_compute[253538]: 2025-11-25 08:15:02.228 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:15:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:15:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.229 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.230 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.230 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.243 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.243 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.243 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.244 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.244 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:15:04 np0005534516 nova_compute[253538]: 2025-11-25 08:15:04.244 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:15:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:12 np0005534516 podman[259682]: 2025-11-25 08:15:12.821328017 +0000 UTC m=+0.077352105 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:15:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:14 np0005534516 podman[259702]: 2025-11-25 08:15:14.811386114 +0000 UTC m=+0.061446788 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 25 03:15:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:20 np0005534516 podman[259722]: 2025-11-25 08:15:20.834779696 +0000 UTC m=+0.092067548 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:15:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:15:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3081447490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:15:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:15:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3081447490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:15:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:15:41.038 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:15:41.039 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:15:41.039 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.193201) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542193240, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 616, "num_deletes": 251, "total_data_size": 710454, "memory_usage": 722584, "flush_reason": "Manual Compaction"}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542213124, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 703838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18022, "largest_seqno": 18637, "table_properties": {"data_size": 700560, "index_size": 1186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7361, "raw_average_key_size": 18, "raw_value_size": 694125, "raw_average_value_size": 1779, "num_data_blocks": 55, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058492, "oldest_key_time": 1764058492, "file_creation_time": 1764058542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 19981 microseconds, and 2921 cpu microseconds.
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.213177) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 703838 bytes OK
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.213201) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.270657) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.270696) EVENT_LOG_v1 {"time_micros": 1764058542270687, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.270716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 707139, prev total WAL file size 707139, number of live WAL files 2.
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.271348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(687KB)], [41(7124KB)]
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542271377, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 7999454, "oldest_snapshot_seqno": -1}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4207 keys, 6270916 bytes, temperature: kUnknown
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542521455, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6270916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6243890, "index_size": 15397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 103354, "raw_average_key_size": 24, "raw_value_size": 6168775, "raw_average_value_size": 1466, "num_data_blocks": 651, "num_entries": 4207, "num_filter_entries": 4207, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.521856) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6270916 bytes
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.586982) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.0 rd, 25.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.0 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(20.3) write-amplify(8.9) OK, records in: 4716, records dropped: 509 output_compression: NoCompression
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.587028) EVENT_LOG_v1 {"time_micros": 1764058542587010, "job": 20, "event": "compaction_finished", "compaction_time_micros": 250209, "compaction_time_cpu_micros": 14126, "output_level": 6, "num_output_files": 1, "total_output_size": 6270916, "num_input_records": 4716, "num_output_records": 4207, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542587503, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058542590258, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.271197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.590396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.590407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.590410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.590412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:15:42.590415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:15:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:43 np0005534516 podman[259749]: 2025-11-25 08:15:43.795848705 +0000 UTC m=+0.050061465 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 03:15:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:45 np0005534516 podman[259793]: 2025-11-25 08:15:45.796945694 +0000 UTC m=+0.061760126 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 39b52cee-5c19-41a7-9484-1f67abf8d311 does not exist
Nov 25 03:15:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9cadbce0-3d97-463d-ab9c-dcedd6262589 does not exist
Nov 25 03:15:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6978cda3-2ace-4701-bc20-988a8800ce7c does not exist
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.37974993 +0000 UTC m=+0.093892710 container create 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.308463792 +0000 UTC m=+0.022606662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:47 np0005534516 systemd[1]: Started libpod-conmon-61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3.scope.
Nov 25 03:15:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.496675139 +0000 UTC m=+0.210817939 container init 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.504496934 +0000 UTC m=+0.218639724 container start 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 03:15:47 np0005534516 eager_hamilton[260077]: 167 167
Nov 25 03:15:47 np0005534516 systemd[1]: libpod-61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3.scope: Deactivated successfully.
Nov 25 03:15:47 np0005534516 conmon[260077]: conmon 61ff190e7776981b7dc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3.scope/container/memory.events
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.516220916 +0000 UTC m=+0.230363696 container attach 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.516840684 +0000 UTC m=+0.230983484 container died 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:15:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2d1e6c21cf0e25c4864b5efd933a7df7f62a57cf14cba223759cc148038211c0-merged.mount: Deactivated successfully.
Nov 25 03:15:47 np0005534516 podman[260060]: 2025-11-25 08:15:47.593369784 +0000 UTC m=+0.307512574 container remove 61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:15:47 np0005534516 systemd[1]: libpod-conmon-61ff190e7776981b7dc56b9e707bfbaad4e28168941affdecbee856b499c91b3.scope: Deactivated successfully.
Nov 25 03:15:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:47 np0005534516 podman[260101]: 2025-11-25 08:15:47.766276632 +0000 UTC m=+0.049945413 container create effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:15:47 np0005534516 systemd[1]: Started libpod-conmon-effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8.scope.
Nov 25 03:15:47 np0005534516 podman[260101]: 2025-11-25 08:15:47.739085865 +0000 UTC m=+0.022754676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:47 np0005534516 podman[260101]: 2025-11-25 08:15:47.868835447 +0000 UTC m=+0.152504318 container init effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:15:47 np0005534516 podman[260101]: 2025-11-25 08:15:47.878162614 +0000 UTC m=+0.161831425 container start effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:15:47 np0005534516 podman[260101]: 2025-11-25 08:15:47.885401872 +0000 UTC m=+0.169070683 container attach effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:15:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:48 np0005534516 suspicious_davinci[260117]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:15:48 np0005534516 suspicious_davinci[260117]: --> relative data size: 1.0
Nov 25 03:15:48 np0005534516 suspicious_davinci[260117]: --> All data devices are unavailable
Nov 25 03:15:48 np0005534516 systemd[1]: libpod-effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8.scope: Deactivated successfully.
Nov 25 03:15:48 np0005534516 systemd[1]: libpod-effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8.scope: Consumed 1.003s CPU time.
Nov 25 03:15:48 np0005534516 podman[260101]: 2025-11-25 08:15:48.96967417 +0000 UTC m=+1.253342951 container died effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 03:15:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-417f9d65e37426df740b86f3aedf463b27af5b72367d314c1dcfaaff67c899dd-merged.mount: Deactivated successfully.
Nov 25 03:15:49 np0005534516 podman[260101]: 2025-11-25 08:15:49.069567393 +0000 UTC m=+1.353236174 container remove effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_davinci, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:15:49 np0005534516 systemd[1]: libpod-conmon-effe37644ccea78fc400615d38a87d28134ced18db019d6303556ed867f947a8.scope: Deactivated successfully.
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.813815936 +0000 UTC m=+0.088960074 container create e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.745211752 +0000 UTC m=+0.020355920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:49 np0005534516 systemd[1]: Started libpod-conmon-e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc.scope.
Nov 25 03:15:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.907456406 +0000 UTC m=+0.182600634 container init e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.917915624 +0000 UTC m=+0.193059762 container start e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.92323662 +0000 UTC m=+0.198380758 container attach e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:15:49 np0005534516 priceless_margulis[260311]: 167 167
Nov 25 03:15:49 np0005534516 systemd[1]: libpod-e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc.scope: Deactivated successfully.
Nov 25 03:15:49 np0005534516 podman[260294]: 2025-11-25 08:15:49.928640469 +0000 UTC m=+0.203784627 container died e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:15:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-58df1f1a9313d6dddfd7ca11d3e412f4bcd9183d429b1fd5d79d3be0d844548b-merged.mount: Deactivated successfully.
Nov 25 03:15:50 np0005534516 podman[260294]: 2025-11-25 08:15:50.098901363 +0000 UTC m=+0.374045541 container remove e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:15:50 np0005534516 systemd[1]: libpod-conmon-e482f062ae2e74fd21bf15ce13f8277c4077577c97769be2c03a8f2146fd22dc.scope: Deactivated successfully.
Nov 25 03:15:50 np0005534516 podman[260335]: 2025-11-25 08:15:50.285746313 +0000 UTC m=+0.050663492 container create d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:15:50 np0005534516 systemd[1]: Started libpod-conmon-d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6.scope.
Nov 25 03:15:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:50 np0005534516 podman[260335]: 2025-11-25 08:15:50.260407647 +0000 UTC m=+0.025324916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95de9514c70fc5ba20521c5706e724acc8efc73e21f7d61ed3de943332ef2eb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95de9514c70fc5ba20521c5706e724acc8efc73e21f7d61ed3de943332ef2eb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95de9514c70fc5ba20521c5706e724acc8efc73e21f7d61ed3de943332ef2eb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95de9514c70fc5ba20521c5706e724acc8efc73e21f7d61ed3de943332ef2eb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:50 np0005534516 podman[260335]: 2025-11-25 08:15:50.376033281 +0000 UTC m=+0.140950470 container init d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:15:50 np0005534516 podman[260335]: 2025-11-25 08:15:50.387002612 +0000 UTC m=+0.151919811 container start d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:15:50 np0005534516 podman[260335]: 2025-11-25 08:15:50.391142497 +0000 UTC m=+0.156059676 container attach d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:15:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]: {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    "0": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "devices": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "/dev/loop3"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            ],
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_name": "ceph_lv0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_size": "21470642176",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "name": "ceph_lv0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "tags": {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_name": "ceph",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.crush_device_class": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.encrypted": "0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_id": "0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.vdo": "0"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            },
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "vg_name": "ceph_vg0"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        }
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    ],
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    "1": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "devices": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "/dev/loop4"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            ],
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_name": "ceph_lv1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_size": "21470642176",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "name": "ceph_lv1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "tags": {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_name": "ceph",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.crush_device_class": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.encrypted": "0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_id": "1",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.vdo": "0"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            },
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "vg_name": "ceph_vg1"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        }
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    ],
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    "2": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "devices": [
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "/dev/loop5"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            ],
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_name": "ceph_lv2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_size": "21470642176",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "name": "ceph_lv2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "tags": {
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.cluster_name": "ceph",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.crush_device_class": "",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.encrypted": "0",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osd_id": "2",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:                "ceph.vdo": "0"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            },
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "type": "block",
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:            "vg_name": "ceph_vg2"
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:        }
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]:    ]
Nov 25 03:15:51 np0005534516 sweet_neumann[260352]: }
Nov 25 03:15:51 np0005534516 systemd[1]: libpod-d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6.scope: Deactivated successfully.
Nov 25 03:15:51 np0005534516 podman[260335]: 2025-11-25 08:15:51.15253433 +0000 UTC m=+0.917451489 container died d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:15:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-95de9514c70fc5ba20521c5706e724acc8efc73e21f7d61ed3de943332ef2eb5-merged.mount: Deactivated successfully.
Nov 25 03:15:51 np0005534516 podman[260335]: 2025-11-25 08:15:51.247526848 +0000 UTC m=+1.012443997 container remove d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:15:51 np0005534516 systemd[1]: libpod-conmon-d38bdedd164a31b380ae596a073b09ae65b819d14b6615f01c7179d62f3b46c6.scope: Deactivated successfully.
Nov 25 03:15:51 np0005534516 podman[260361]: 2025-11-25 08:15:51.287001721 +0000 UTC m=+0.108648103 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.837058784 +0000 UTC m=+0.043558368 container create 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:15:51 np0005534516 systemd[1]: Started libpod-conmon-98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d.scope.
Nov 25 03:15:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.819516242 +0000 UTC m=+0.026015846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.926713065 +0000 UTC m=+0.133212679 container init 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.934415517 +0000 UTC m=+0.140915101 container start 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.93854153 +0000 UTC m=+0.145041114 container attach 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:15:51 np0005534516 recursing_pasteur[260557]: 167 167
Nov 25 03:15:51 np0005534516 systemd[1]: libpod-98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d.scope: Deactivated successfully.
Nov 25 03:15:51 np0005534516 conmon[260557]: conmon 98cc12d0b102e8c406da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d.scope/container/memory.events
Nov 25 03:15:51 np0005534516 podman[260541]: 2025-11-25 08:15:51.941563132 +0000 UTC m=+0.148062736 container died 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:15:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e780dc2044f9def97b44245970dfa57ed5be66c7f4487e21d0aa0cddefffdb65-merged.mount: Deactivated successfully.
Nov 25 03:15:52 np0005534516 podman[260541]: 2025-11-25 08:15:52.172099441 +0000 UTC m=+0.378599055 container remove 98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:15:52 np0005534516 systemd[1]: libpod-conmon-98cc12d0b102e8c406daab9b4f6d83d497fb01fb208f2780dd9fb1e843dc7d1d.scope: Deactivated successfully.
Nov 25 03:15:52 np0005534516 podman[260580]: 2025-11-25 08:15:52.372522844 +0000 UTC m=+0.063538525 container create cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:15:52 np0005534516 systemd[1]: Started libpod-conmon-cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98.scope.
Nov 25 03:15:52 np0005534516 podman[260580]: 2025-11-25 08:15:52.33301384 +0000 UTC m=+0.024029551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:15:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:15:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b13059a6cd3fac49e405fe6456a7c7986a57188ba14cbada8969f48e763596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b13059a6cd3fac49e405fe6456a7c7986a57188ba14cbada8969f48e763596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b13059a6cd3fac49e405fe6456a7c7986a57188ba14cbada8969f48e763596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b13059a6cd3fac49e405fe6456a7c7986a57188ba14cbada8969f48e763596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:15:52 np0005534516 podman[260580]: 2025-11-25 08:15:52.475503012 +0000 UTC m=+0.166518713 container init cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:15:52 np0005534516 podman[260580]: 2025-11-25 08:15:52.482565615 +0000 UTC m=+0.173581296 container start cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:15:52 np0005534516 podman[260580]: 2025-11-25 08:15:52.491727758 +0000 UTC m=+0.182743549 container attach cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:15:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:15:53
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'images', 'volumes', '.rgw.root']
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]: {
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_id": 1,
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "type": "bluestore"
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    },
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_id": 2,
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "type": "bluestore"
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    },
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_id": 0,
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:        "type": "bluestore"
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]:    }
Nov 25 03:15:53 np0005534516 romantic_khayyam[260596]: }
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:15:53 np0005534516 systemd[1]: libpod-cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98.scope: Deactivated successfully.
Nov 25 03:15:53 np0005534516 podman[260580]: 2025-11-25 08:15:53.449702649 +0000 UTC m=+1.140718330 container died cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 03:15:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-33b13059a6cd3fac49e405fe6456a7c7986a57188ba14cbada8969f48e763596-merged.mount: Deactivated successfully.
Nov 25 03:15:53 np0005534516 podman[260580]: 2025-11-25 08:15:53.506013674 +0000 UTC m=+1.197029365 container remove cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khayyam, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:15:53 np0005534516 systemd[1]: libpod-conmon-cc0d29261bda1ca648d6a5edee0ba7576b9c3715de37cfa82de05532ab13af98.scope: Deactivated successfully.
Nov 25 03:15:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:15:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:15:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cbbf3abe-5dd3-4eda-97e2-6dda03b600a5 does not exist
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1aba95a2-b8e8-481f-ade3-3f1ab997b741 does not exist
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:15:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:15:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:15:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:15:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:15:59 np0005534516 nova_compute[253538]: 2025-11-25 08:15:59.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:00 np0005534516 nova_compute[253538]: 2025-11-25 08:16:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:01 np0005534516 nova_compute[253538]: 2025-11-25 08:16:01.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:16:02 np0005534516 nova_compute[253538]: 2025-11-25 08:16:02.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:16:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:16:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4182931423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.033 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.219 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.221 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.221 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.221 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.375 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.375 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.399 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.432 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.433 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.455 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.476 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:16:03 np0005534516 nova_compute[253538]: 2025-11-25 08:16:03.495 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:16:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:16:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:16:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166761886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:16:04 np0005534516 nova_compute[253538]: 2025-11-25 08:16:04.073 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:16:04 np0005534516 nova_compute[253538]: 2025-11-25 08:16:04.081 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:16:04 np0005534516 nova_compute[253538]: 2025-11-25 08:16:04.094 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:16:04 np0005534516 nova_compute[253538]: 2025-11-25 08:16:04.096 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:16:04 np0005534516 nova_compute[253538]: 2025-11-25 08:16:04.096 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:16:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.097 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.098 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.098 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.148 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.148 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.149 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.150 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.150 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:05 np0005534516 nova_compute[253538]: 2025-11-25 08:16:05.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:16:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:14 np0005534516 podman[260737]: 2025-11-25 08:16:14.828082808 +0000 UTC m=+0.058791365 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:16:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:16 np0005534516 podman[260756]: 2025-11-25 08:16:16.818015932 +0000 UTC m=+0.068341787 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:16:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:21 np0005534516 podman[260776]: 2025-11-25 08:16:21.906085923 +0000 UTC m=+0.147604462 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 03:16:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:16:41.040 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:16:41.041 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:16:41.041 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:16:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:45 np0005534516 podman[260803]: 2025-11-25 08:16:45.826512775 +0000 UTC m=+0.070734943 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 03:16:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:47 np0005534516 podman[260821]: 2025-11-25 08:16:47.839064889 +0000 UTC m=+0.084425658 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:16:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:52 np0005534516 podman[260841]: 2025-11-25 08:16:52.850912387 +0000 UTC m=+0.100994624 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:16:53
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms']
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:16:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:16:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 66833c9a-0a7c-4ace-ac5c-09db02332612 does not exist
Nov 25 03:16:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e343439b-c2ec-4307-b143-e0005b62ddfd does not exist
Nov 25 03:16:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3734917a-fbfa-438a-8c32-1f82743aff47 does not exist
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:16:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.276598524 +0000 UTC m=+0.101348275 container create 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.2014587 +0000 UTC m=+0.026208461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:16:55 np0005534516 systemd[1]: Started libpod-conmon-4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25.scope.
Nov 25 03:16:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.443475724 +0000 UTC m=+0.268225475 container init 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.452677547 +0000 UTC m=+0.277427298 container start 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:16:55 np0005534516 blissful_swanson[261152]: 167 167
Nov 25 03:16:55 np0005534516 systemd[1]: libpod-4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25.scope: Deactivated successfully.
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.464993815 +0000 UTC m=+0.289743586 container attach 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.465721525 +0000 UTC m=+0.290471256 container died 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:16:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f1da25bc2ca39fc58542f97001daa68715802cc878026afa00610bfe546164c9-merged.mount: Deactivated successfully.
Nov 25 03:16:55 np0005534516 podman[261136]: 2025-11-25 08:16:55.554175444 +0000 UTC m=+0.378925205 container remove 4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 03:16:55 np0005534516 systemd[1]: libpod-conmon-4588aba8f3c1a008f9ca827b93d1e25b3c22d0e7e683b649774a919760bd4a25.scope: Deactivated successfully.
Nov 25 03:16:55 np0005534516 podman[261175]: 2025-11-25 08:16:55.75691902 +0000 UTC m=+0.046535569 container create dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:16:55 np0005534516 systemd[1]: Started libpod-conmon-dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4.scope.
Nov 25 03:16:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:16:55 np0005534516 podman[261175]: 2025-11-25 08:16:55.734251708 +0000 UTC m=+0.023868307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:16:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:55 np0005534516 podman[261175]: 2025-11-25 08:16:55.849150592 +0000 UTC m=+0.138767151 container init dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:16:55 np0005534516 podman[261175]: 2025-11-25 08:16:55.858122028 +0000 UTC m=+0.147738577 container start dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:16:55 np0005534516 podman[261175]: 2025-11-25 08:16:55.862597482 +0000 UTC m=+0.152214151 container attach dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:16:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:56 np0005534516 compassionate_kalam[261191]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:16:56 np0005534516 compassionate_kalam[261191]: --> relative data size: 1.0
Nov 25 03:16:56 np0005534516 compassionate_kalam[261191]: --> All data devices are unavailable
Nov 25 03:16:56 np0005534516 systemd[1]: libpod-dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4.scope: Deactivated successfully.
Nov 25 03:16:56 np0005534516 podman[261175]: 2025-11-25 08:16:56.875369886 +0000 UTC m=+1.164986525 container died dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:16:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7701673b7f0b051c7e96eb9312949f3b47dc8199ba8d752ed9fe5c522a49e36c-merged.mount: Deactivated successfully.
Nov 25 03:16:57 np0005534516 podman[261175]: 2025-11-25 08:16:57.033469938 +0000 UTC m=+1.323086477 container remove dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:16:57 np0005534516 systemd[1]: libpod-conmon-dd0adee3e1626a290bc114ed9314a8fc54c1f4160b9dccd1fff5de5401694ca4.scope: Deactivated successfully.
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.579113878 +0000 UTC m=+0.037347037 container create 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:16:57 np0005534516 systemd[1]: Started libpod-conmon-3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6.scope.
Nov 25 03:16:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.650616771 +0000 UTC m=+0.108849950 container init 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.656208034 +0000 UTC m=+0.114441193 container start 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.563633743 +0000 UTC m=+0.021866922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.658890948 +0000 UTC m=+0.117124127 container attach 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:16:57 np0005534516 great_pike[261391]: 167 167
Nov 25 03:16:57 np0005534516 systemd[1]: libpod-3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6.scope: Deactivated successfully.
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.661610083 +0000 UTC m=+0.119843242 container died 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:16:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f0e54fee6f70de5380d431f0416e214323a354b09cb143172fe85da841af6f5d-merged.mount: Deactivated successfully.
Nov 25 03:16:57 np0005534516 podman[261374]: 2025-11-25 08:16:57.695285097 +0000 UTC m=+0.153518256 container remove 3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:16:57 np0005534516 systemd[1]: libpod-conmon-3e81a85217534ee1694b36306b355a36b64017245707d6f68f9c8b3dcf2b45a6.scope: Deactivated successfully.
Nov 25 03:16:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:16:57 np0005534516 podman[261414]: 2025-11-25 08:16:57.852887154 +0000 UTC m=+0.038000244 container create 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:16:57 np0005534516 systemd[1]: Started libpod-conmon-15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6.scope.
Nov 25 03:16:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:16:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294e8c6b5452b2f394ed9b93874000aae8d8022ef3ce8a9ab0b2f08b9e1745cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294e8c6b5452b2f394ed9b93874000aae8d8022ef3ce8a9ab0b2f08b9e1745cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294e8c6b5452b2f394ed9b93874000aae8d8022ef3ce8a9ab0b2f08b9e1745cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294e8c6b5452b2f394ed9b93874000aae8d8022ef3ce8a9ab0b2f08b9e1745cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:16:57 np0005534516 podman[261414]: 2025-11-25 08:16:57.924906401 +0000 UTC m=+0.110019541 container init 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:16:57 np0005534516 podman[261414]: 2025-11-25 08:16:57.834673914 +0000 UTC m=+0.019787024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:16:57 np0005534516 podman[261414]: 2025-11-25 08:16:57.931729518 +0000 UTC m=+0.116842608 container start 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:16:57 np0005534516 podman[261414]: 2025-11-25 08:16:57.935019569 +0000 UTC m=+0.120132669 container attach 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]: {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    "0": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "devices": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "/dev/loop3"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            ],
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_name": "ceph_lv0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_size": "21470642176",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "name": "ceph_lv0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "tags": {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_name": "ceph",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.crush_device_class": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.encrypted": "0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_id": "0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.vdo": "0"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            },
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "vg_name": "ceph_vg0"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        }
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    ],
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    "1": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "devices": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "/dev/loop4"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            ],
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_name": "ceph_lv1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_size": "21470642176",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "name": "ceph_lv1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "tags": {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_name": "ceph",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.crush_device_class": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.encrypted": "0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_id": "1",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.vdo": "0"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            },
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "vg_name": "ceph_vg1"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        }
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    ],
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    "2": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "devices": [
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "/dev/loop5"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            ],
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_name": "ceph_lv2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_size": "21470642176",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "name": "ceph_lv2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "tags": {
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.cluster_name": "ceph",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.crush_device_class": "",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.encrypted": "0",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osd_id": "2",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:                "ceph.vdo": "0"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            },
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "type": "block",
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:            "vg_name": "ceph_vg2"
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:        }
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]:    ]
Nov 25 03:16:58 np0005534516 elastic_snyder[261430]: }
Nov 25 03:16:58 np0005534516 systemd[1]: libpod-15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6.scope: Deactivated successfully.
Nov 25 03:16:58 np0005534516 podman[261414]: 2025-11-25 08:16:58.643653515 +0000 UTC m=+0.828766605 container died 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:16:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:16:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-294e8c6b5452b2f394ed9b93874000aae8d8022ef3ce8a9ab0b2f08b9e1745cb-merged.mount: Deactivated successfully.
Nov 25 03:16:58 np0005534516 podman[261414]: 2025-11-25 08:16:58.956341809 +0000 UTC m=+1.141454899 container remove 15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_snyder, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:16:58 np0005534516 systemd[1]: libpod-conmon-15d8fae135eac2770e62a4ac9fc7a52a931bf44c34513fc4f82ffdb0479ca4d6.scope: Deactivated successfully.
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.727551622 +0000 UTC m=+0.051672659 container create beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:16:59 np0005534516 systemd[1]: Started libpod-conmon-beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c.scope.
Nov 25 03:16:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.702258658 +0000 UTC m=+0.026379745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.801627227 +0000 UTC m=+0.125748264 container init beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.807070366 +0000 UTC m=+0.131191403 container start beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.809848182 +0000 UTC m=+0.133969239 container attach beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:16:59 np0005534516 amazing_thompson[261610]: 167 167
Nov 25 03:16:59 np0005534516 systemd[1]: libpod-beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c.scope: Deactivated successfully.
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.812156456 +0000 UTC m=+0.136277493 container died beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:16:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-53549c19fffff8a0b352becd3ee82039856d33f9534953df5f4700408048001d-merged.mount: Deactivated successfully.
Nov 25 03:16:59 np0005534516 podman[261593]: 2025-11-25 08:16:59.848698779 +0000 UTC m=+0.172819826 container remove beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:16:59 np0005534516 systemd[1]: libpod-conmon-beed498242046d2626530a3c1c8d25a1d0cd339bd6fc2b97868cea1b093edc3c.scope: Deactivated successfully.
Nov 25 03:17:00 np0005534516 podman[261633]: 2025-11-25 08:17:00.044806953 +0000 UTC m=+0.057397397 container create e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:17:00 np0005534516 systemd[1]: Started libpod-conmon-e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508.scope.
Nov 25 03:17:00 np0005534516 podman[261633]: 2025-11-25 08:17:00.017165194 +0000 UTC m=+0.029755698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:17:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:17:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99359affc54f765bfdf0a75d902a4ff56e04f70b2f952c9d0ad9aca3a13266e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:17:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99359affc54f765bfdf0a75d902a4ff56e04f70b2f952c9d0ad9aca3a13266e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:17:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99359affc54f765bfdf0a75d902a4ff56e04f70b2f952c9d0ad9aca3a13266e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:17:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99359affc54f765bfdf0a75d902a4ff56e04f70b2f952c9d0ad9aca3a13266e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:17:00 np0005534516 podman[261633]: 2025-11-25 08:17:00.152950932 +0000 UTC m=+0.165541436 container init e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:17:00 np0005534516 podman[261633]: 2025-11-25 08:17:00.166106753 +0000 UTC m=+0.178697147 container start e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 03:17:00 np0005534516 podman[261633]: 2025-11-25 08:17:00.169900777 +0000 UTC m=+0.182491241 container attach e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:17:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]: {
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_id": 1,
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "type": "bluestore"
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    },
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_id": 2,
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "type": "bluestore"
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    },
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_id": 0,
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:        "type": "bluestore"
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]:    }
Nov 25 03:17:01 np0005534516 pedantic_liskov[261650]: }
Nov 25 03:17:01 np0005534516 systemd[1]: libpod-e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508.scope: Deactivated successfully.
Nov 25 03:17:01 np0005534516 podman[261633]: 2025-11-25 08:17:01.252263554 +0000 UTC m=+1.264853958 container died e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:17:01 np0005534516 systemd[1]: libpod-e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508.scope: Consumed 1.095s CPU time.
Nov 25 03:17:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-99359affc54f765bfdf0a75d902a4ff56e04f70b2f952c9d0ad9aca3a13266e7-merged.mount: Deactivated successfully.
Nov 25 03:17:01 np0005534516 podman[261633]: 2025-11-25 08:17:01.303688975 +0000 UTC m=+1.316279389 container remove e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 03:17:01 np0005534516 systemd[1]: libpod-conmon-e87bd6b3e79ca825b0b7ba477b02ad30e7c8c780aeac6eed391c99f07b607508.scope: Deactivated successfully.
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:17:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a128de3f-df0e-4277-a509-9512f3dd1809 does not exist
Nov 25 03:17:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0514cd39-43e5-40b8-9645-5eee54238b9b does not exist
Nov 25 03:17:01 np0005534516 nova_compute[253538]: 2025-11-25 08:17:01.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:01 np0005534516 nova_compute[253538]: 2025-11-25 08:17:01.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:01 np0005534516 nova_compute[253538]: 2025-11-25 08:17:01.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:17:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:17:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.588 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:17:03 np0005534516 nova_compute[253538]: 2025-11-25 08:17:03.588 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:17:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:17:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:17:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342127741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.012 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.173 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.174 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.174 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.175 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.224 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.225 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.247 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:17:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:17:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129252528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.670 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.676 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.688 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.689 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:17:04 np0005534516 nova_compute[253538]: 2025-11-25 08:17:04.690 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:17:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:05 np0005534516 nova_compute[253538]: 2025-11-25 08:17:05.676 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:05 np0005534516 nova_compute[253538]: 2025-11-25 08:17:05.676 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:05 np0005534516 nova_compute[253538]: 2025-11-25 08:17:05.677 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:17:06 np0005534516 nova_compute[253538]: 2025-11-25 08:17:06.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:06 np0005534516 nova_compute[253538]: 2025-11-25 08:17:06.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:17:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.723342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627723382, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 945, "num_deletes": 257, "total_data_size": 1315662, "memory_usage": 1346288, "flush_reason": "Manual Compaction"}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627731393, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1293082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18638, "largest_seqno": 19582, "table_properties": {"data_size": 1288412, "index_size": 2258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9677, "raw_average_key_size": 18, "raw_value_size": 1279034, "raw_average_value_size": 2445, "num_data_blocks": 103, "num_entries": 523, "num_filter_entries": 523, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058543, "oldest_key_time": 1764058543, "file_creation_time": 1764058627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8105 microseconds, and 3776 cpu microseconds.
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.731442) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1293082 bytes OK
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.731465) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.732975) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.732990) EVENT_LOG_v1 {"time_micros": 1764058627732985, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.733008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1311094, prev total WAL file size 1311094, number of live WAL files 2.
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.733541) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1262KB)], [44(6123KB)]
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627733585, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7563998, "oldest_snapshot_seqno": -1}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4204 keys, 7433473 bytes, temperature: kUnknown
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627774157, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7433473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7404719, "index_size": 17124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104367, "raw_average_key_size": 24, "raw_value_size": 7327924, "raw_average_value_size": 1743, "num_data_blocks": 724, "num_entries": 4204, "num_filter_entries": 4204, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.774397) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7433473 bytes
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.775579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.1 rd, 182.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 6.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(11.6) write-amplify(5.7) OK, records in: 4730, records dropped: 526 output_compression: NoCompression
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.775594) EVENT_LOG_v1 {"time_micros": 1764058627775587, "job": 22, "event": "compaction_finished", "compaction_time_micros": 40653, "compaction_time_cpu_micros": 15410, "output_level": 6, "num_output_files": 1, "total_output_size": 7433473, "num_input_records": 4730, "num_output_records": 4204, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627775859, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058627776785, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.733469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.776887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.776894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.776896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.776898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:07 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:17:07.776900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:17:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:16 np0005534516 podman[261791]: 2025-11-25 08:17:16.863140265 +0000 UTC m=+0.109292481 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 03:17:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:18 np0005534516 podman[261810]: 2025-11-25 08:17:18.954654586 +0000 UTC m=+0.061425107 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:17:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:23 np0005534516 podman[261829]: 2025-11-25 08:17:23.879109985 +0000 UTC m=+0.126113814 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:17:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:17:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1009571275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:17:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:17:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1009571275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:17:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:17:41.042 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:17:41.042 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:17:41.042 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:17:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:47 np0005534516 podman[261855]: 2025-11-25 08:17:47.80505546 +0000 UTC m=+0.061056315 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:17:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:49 np0005534516 podman[261875]: 2025-11-25 08:17:49.815655748 +0000 UTC m=+0.064036680 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:17:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:17:53
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:17:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:17:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:54 np0005534516 podman[261893]: 2025-11-25 08:17:54.870578923 +0000 UTC m=+0.112729129 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:17:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:17:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:17:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a82818f5-78cf-439e-a3b7-9d107b3d3cd8 does not exist
Nov 25 03:18:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5ffd2f8e-59f6-4943-86e4-b2bf27ac9e62 does not exist
Nov 25 03:18:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1a2b8ad4-0e22-4914-beb2-5d727cd60846 does not exist
Nov 25 03:18:02 np0005534516 nova_compute[253538]: 2025-11-25 08:18:02.552 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:18:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:03 np0005534516 podman[262189]: 2025-11-25 08:18:03.302452725 +0000 UTC m=+0.040971492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.572 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.601 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.602 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.602 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:18:03 np0005534516 nova_compute[253538]: 2025-11-25 08:18:03.603 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:18:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:18:03 np0005534516 podman[262189]: 2025-11-25 08:18:03.728103811 +0000 UTC m=+0.466622498 container create 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:03 np0005534516 systemd[1]: Started libpod-conmon-2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d.scope.
Nov 25 03:18:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:18:04 np0005534516 podman[262189]: 2025-11-25 08:18:04.742677642 +0000 UTC m=+1.481196369 container init 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:04 np0005534516 podman[262189]: 2025-11-25 08:18:04.750722438 +0000 UTC m=+1.489241155 container start 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:18:04 np0005534516 elastic_neumann[262226]: 167 167
Nov 25 03:18:04 np0005534516 systemd[1]: libpod-2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d.scope: Deactivated successfully.
Nov 25 03:18:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:18:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227971988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:18:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:04 np0005534516 nova_compute[253538]: 2025-11-25 08:18:04.803 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:18:05 np0005534516 podman[262189]: 2025-11-25 08:18:05.006381479 +0000 UTC m=+1.744900206 container attach 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:18:05 np0005534516 podman[262189]: 2025-11-25 08:18:05.007770338 +0000 UTC m=+1.746289015 container died 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.034 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.035 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.107 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.108 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.130 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:18:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:18:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4409 writes, 19K keys, 4409 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4408 writes, 4408 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1299 writes, 5910 keys, 1299 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s#012Interval WAL: 1298 writes, 1298 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     26.1      0.85              0.08        11    0.077       0      0       0.0       0.0#012  L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1     49.7     40.7      1.68              0.20        10    0.168     43K   5244       0.0       0.0#012 Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1     33.1     35.8      2.53              0.28        21    0.121     43K   5244       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     28.0     28.3      1.50              0.15        10    0.150     23K   3006       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     49.7     40.7      1.68              0.20        10    0.168     43K   5244       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     26.2      0.85              0.08        10    0.085       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 2.5 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 308.00 MB usage: 6.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(406,5.97 MB,1.93965%) FilterBlock(22,127.42 KB,0.0404011%) IndexBlock(22,233.92 KB,0.0741686%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 03:18:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:18:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186907086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.850 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.720s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.857 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.874 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.876 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:18:05 np0005534516 nova_compute[253538]: 2025-11-25 08:18:05.877 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:18:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-09fb59c414841cd53f65eea0742150bc4df9ab0a8c78e7c239e5cb6fd8784ce2-merged.mount: Deactivated successfully.
Nov 25 03:18:06 np0005534516 podman[262189]: 2025-11-25 08:18:06.136964358 +0000 UTC m=+2.875483055 container remove 2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:18:06 np0005534516 systemd[1]: libpod-conmon-2b74b59daaabe6b63fb4c9070543f10220f3e095567a5df729cda3d5d6a07a5d.scope: Deactivated successfully.
Nov 25 03:18:06 np0005534516 podman[262274]: 2025-11-25 08:18:06.345063294 +0000 UTC m=+0.060096029 container create 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:18:06 np0005534516 systemd[1]: Started libpod-conmon-8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67.scope.
Nov 25 03:18:06 np0005534516 podman[262274]: 2025-11-25 08:18:06.315275817 +0000 UTC m=+0.030308542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:06 np0005534516 podman[262274]: 2025-11-25 08:18:06.454778116 +0000 UTC m=+0.169810851 container init 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:18:06 np0005534516 podman[262274]: 2025-11-25 08:18:06.462361579 +0000 UTC m=+0.177394304 container start 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:06 np0005534516 podman[262274]: 2025-11-25 08:18:06.465345623 +0000 UTC m=+0.180378338 container attach 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:18:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:06 np0005534516 nova_compute[253538]: 2025-11-25 08:18:06.857 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:06 np0005534516 nova_compute[253538]: 2025-11-25 08:18:06.858 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:18:07 np0005534516 nova_compute[253538]: 2025-11-25 08:18:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:07 np0005534516 priceless_keller[262291]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:18:07 np0005534516 priceless_keller[262291]: --> relative data size: 1.0
Nov 25 03:18:07 np0005534516 priceless_keller[262291]: --> All data devices are unavailable
Nov 25 03:18:07 np0005534516 systemd[1]: libpod-8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67.scope: Deactivated successfully.
Nov 25 03:18:07 np0005534516 systemd[1]: libpod-8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67.scope: Consumed 1.069s CPU time.
Nov 25 03:18:07 np0005534516 podman[262274]: 2025-11-25 08:18:07.585006864 +0000 UTC m=+1.300039609 container died 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:18:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-84e31b1878c718bb1043351e8533db03d1a5081e83e3f07970e812b376a5144e-merged.mount: Deactivated successfully.
Nov 25 03:18:07 np0005534516 podman[262274]: 2025-11-25 08:18:07.744371971 +0000 UTC m=+1.459404676 container remove 8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:18:07 np0005534516 systemd[1]: libpod-conmon-8430022677a88b665d903cd6133640f88efdb312311e8980d10d14bb3dc21a67.scope: Deactivated successfully.
Nov 25 03:18:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:08 np0005534516 podman[262472]: 2025-11-25 08:18:08.343550392 +0000 UTC m=+0.026103314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:08 np0005534516 nova_compute[253538]: 2025-11-25 08:18:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:08 np0005534516 nova_compute[253538]: 2025-11-25 08:18:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:08 np0005534516 podman[262472]: 2025-11-25 08:18:08.597222328 +0000 UTC m=+0.279775220 container create 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:18:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:09 np0005534516 systemd[1]: Started libpod-conmon-32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0.scope.
Nov 25 03:18:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:09 np0005534516 nova_compute[253538]: 2025-11-25 08:18:09.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:18:09 np0005534516 podman[262472]: 2025-11-25 08:18:09.906080604 +0000 UTC m=+1.588633576 container init 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:18:09 np0005534516 podman[262472]: 2025-11-25 08:18:09.915029476 +0000 UTC m=+1.597582398 container start 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:18:09 np0005534516 sad_davinci[262488]: 167 167
Nov 25 03:18:09 np0005534516 systemd[1]: libpod-32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0.scope: Deactivated successfully.
Nov 25 03:18:10 np0005534516 podman[262472]: 2025-11-25 08:18:10.356532978 +0000 UTC m=+2.039085860 container attach 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:18:10 np0005534516 podman[262472]: 2025-11-25 08:18:10.356934529 +0000 UTC m=+2.039487411 container died 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:18:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bb4c7a6e586fb5b01f7432a3ec61ddd5fbca1e2465aea6956c8c2f1a612830f3-merged.mount: Deactivated successfully.
Nov 25 03:18:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:13 np0005534516 podman[262472]: 2025-11-25 08:18:13.0290955 +0000 UTC m=+4.711648412 container remove 32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:13 np0005534516 systemd[1]: libpod-conmon-32794d9ef6dd27509f0d3a3f1e5c4b757492cb90bc46dea1953600a459ec7ca0.scope: Deactivated successfully.
Nov 25 03:18:13 np0005534516 podman[262511]: 2025-11-25 08:18:13.248519373 +0000 UTC m=+0.037902736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:13 np0005534516 podman[262511]: 2025-11-25 08:18:13.900978281 +0000 UTC m=+0.690361604 container create 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:18:14 np0005534516 systemd[1]: Started libpod-conmon-7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75.scope.
Nov 25 03:18:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12d8ae42d6434e44b1fe95ef1dccbfc41b311a1c035481fc80d095a7f41caa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12d8ae42d6434e44b1fe95ef1dccbfc41b311a1c035481fc80d095a7f41caa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12d8ae42d6434e44b1fe95ef1dccbfc41b311a1c035481fc80d095a7f41caa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12d8ae42d6434e44b1fe95ef1dccbfc41b311a1c035481fc80d095a7f41caa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:14 np0005534516 podman[262511]: 2025-11-25 08:18:14.826086108 +0000 UTC m=+1.615469521 container init 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:18:14 np0005534516 podman[262511]: 2025-11-25 08:18:14.833392714 +0000 UTC m=+1.622776047 container start 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:18:15 np0005534516 podman[262511]: 2025-11-25 08:18:15.196868503 +0000 UTC m=+1.986251886 container attach 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]: {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    "0": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "devices": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "/dev/loop3"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            ],
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_name": "ceph_lv0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_size": "21470642176",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "name": "ceph_lv0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "tags": {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_name": "ceph",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.crush_device_class": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.encrypted": "0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_id": "0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.vdo": "0"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            },
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "vg_name": "ceph_vg0"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        }
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    ],
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    "1": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "devices": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "/dev/loop4"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            ],
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_name": "ceph_lv1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_size": "21470642176",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "name": "ceph_lv1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "tags": {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_name": "ceph",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.crush_device_class": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.encrypted": "0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_id": "1",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.vdo": "0"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            },
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "vg_name": "ceph_vg1"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        }
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    ],
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    "2": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "devices": [
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "/dev/loop5"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            ],
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_name": "ceph_lv2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_size": "21470642176",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "name": "ceph_lv2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "tags": {
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.cluster_name": "ceph",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.crush_device_class": "",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.encrypted": "0",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osd_id": "2",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:                "ceph.vdo": "0"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            },
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "type": "block",
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:            "vg_name": "ceph_vg2"
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:        }
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]:    ]
Nov 25 03:18:15 np0005534516 compassionate_joliot[262528]: }
Nov 25 03:18:15 np0005534516 systemd[1]: libpod-7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75.scope: Deactivated successfully.
Nov 25 03:18:15 np0005534516 podman[262511]: 2025-11-25 08:18:15.637260295 +0000 UTC m=+2.426643628 container died 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:18:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e12d8ae42d6434e44b1fe95ef1dccbfc41b311a1c035481fc80d095a7f41caa8-merged.mount: Deactivated successfully.
Nov 25 03:18:16 np0005534516 podman[262511]: 2025-11-25 08:18:16.0543342 +0000 UTC m=+2.843717523 container remove 7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:18:16 np0005534516 systemd[1]: libpod-conmon-7be6229695e1d728e05bd07291a3e7f7fb065040f33b98ffd5350a1c21d29f75.scope: Deactivated successfully.
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.686556769 +0000 UTC m=+0.045319724 container create dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:18:16 np0005534516 systemd[1]: Started libpod-conmon-dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41.scope.
Nov 25 03:18:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.666202958 +0000 UTC m=+0.024965933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.767713519 +0000 UTC m=+0.126476524 container init dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.774004995 +0000 UTC m=+0.132767940 container start dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.777484234 +0000 UTC m=+0.136247179 container attach dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:18:16 np0005534516 amazing_sanderson[262705]: 167 167
Nov 25 03:18:16 np0005534516 systemd[1]: libpod-dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41.scope: Deactivated successfully.
Nov 25 03:18:16 np0005534516 conmon[262705]: conmon dcbe9be1f12d5fcf10c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41.scope/container/memory.events
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.780877019 +0000 UTC m=+0.139639964 container died dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:18:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e1373ac63a59a036aaf889da73dc581b7cd852846e228c31afba9fb78d92bcbf-merged.mount: Deactivated successfully.
Nov 25 03:18:16 np0005534516 podman[262689]: 2025-11-25 08:18:16.821951503 +0000 UTC m=+0.180714448 container remove dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sanderson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:18:16 np0005534516 systemd[1]: libpod-conmon-dcbe9be1f12d5fcf10c91f0e99c9c7ef08a4e5f91d548391ebb2d35aeca17f41.scope: Deactivated successfully.
Nov 25 03:18:17 np0005534516 podman[262730]: 2025-11-25 08:18:17.002867075 +0000 UTC m=+0.043453592 container create 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 03:18:17 np0005534516 systemd[1]: Started libpod-conmon-6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b.scope.
Nov 25 03:18:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:18:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6e4f18c3561630314727e837b7aaffe5625688fb77100414e733f9b65fff43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:17 np0005534516 podman[262730]: 2025-11-25 08:18:16.984563041 +0000 UTC m=+0.025149588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:18:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6e4f18c3561630314727e837b7aaffe5625688fb77100414e733f9b65fff43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6e4f18c3561630314727e837b7aaffe5625688fb77100414e733f9b65fff43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6e4f18c3561630314727e837b7aaffe5625688fb77100414e733f9b65fff43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:18:17 np0005534516 podman[262730]: 2025-11-25 08:18:17.092486972 +0000 UTC m=+0.133073539 container init 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 03:18:17 np0005534516 podman[262730]: 2025-11-25 08:18:17.102777752 +0000 UTC m=+0.143364269 container start 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:18:17 np0005534516 podman[262730]: 2025-11-25 08:18:17.106351521 +0000 UTC m=+0.146938088 container attach 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:18:17 np0005534516 affectionate_germain[262747]: {
Nov 25 03:18:17 np0005534516 affectionate_germain[262747]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:18:17 np0005534516 affectionate_germain[262747]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_id": 1,
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "type": "bluestore"
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:    },
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_id": 2,
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "type": "bluestore"
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:    },
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_id": 0,
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:        "type": "bluestore"
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]:    }
Nov 25 03:18:18 np0005534516 affectionate_germain[262747]: }
Nov 25 03:18:18 np0005534516 systemd[1]: libpod-6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b.scope: Deactivated successfully.
Nov 25 03:18:18 np0005534516 podman[262781]: 2025-11-25 08:18:18.067284334 +0000 UTC m=+0.023510560 container died 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 03:18:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cd6e4f18c3561630314727e837b7aaffe5625688fb77100414e733f9b65fff43-merged.mount: Deactivated successfully.
Nov 25 03:18:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:18 np0005534516 podman[262781]: 2025-11-25 08:18:18.117427923 +0000 UTC m=+0.073654129 container remove 6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:18:18 np0005534516 systemd[1]: libpod-conmon-6cca11b8ee9d813f7027a236cfef6294212078c55f80499c91dc3cca737f292b.scope: Deactivated successfully.
Nov 25 03:18:18 np0005534516 podman[262780]: 2025-11-25 08:18:18.13617305 +0000 UTC m=+0.071899861 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:18:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:18:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:18:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f432dc5-72e6-4627-afd8-3506ff40ac7c does not exist
Nov 25 03:18:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev aa484492-f9a2-478f-8cfe-f6b373c9938e does not exist
Nov 25 03:18:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:18:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:20 np0005534516 podman[262865]: 2025-11-25 08:18:20.813063513 +0000 UTC m=+0.060157630 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:18:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:25 np0005534516 podman[262888]: 2025-11-25 08:18:25.847174712 +0000 UTC m=+0.100201976 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:18:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:18:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3105841533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:18:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:18:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3105841533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:18:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 25 03:18:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 03:18:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 25 03:18:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:18:41.043 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:18:41.044 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:18:41.044 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:18:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 25 03:18:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 03:18:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 25 03:18:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 25 03:18:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 03:18:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 25 03:18:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Nov 25 03:18:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 25 03:18:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 03:18:44 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 25 03:18:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 25 03:18:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 33 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.5 MiB/s wr, 54 op/s
Nov 25 03:18:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 25 03:18:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 03:18:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 25 03:18:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 33 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 MiB/s wr, 52 op/s
Nov 25 03:18:48 np0005534516 podman[262915]: 2025-11-25 08:18:48.830710045 +0000 UTC m=+0.074548546 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:18:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 03:18:51 np0005534516 podman[262934]: 2025-11-25 08:18:51.852250238 +0000 UTC m=+0.099845495 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 03:18:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.0 MiB/s wr, 30 op/s
Nov 25 03:18:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:18:53
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images']
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:18:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:18:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 25 03:18:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 03:18:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 25 03:18:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Nov 25 03:18:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 945 KiB/s wr, 22 op/s
Nov 25 03:18:56 np0005534516 podman[262955]: 2025-11-25 08:18:56.877462349 +0000 UTC m=+0.120448295 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 03:18:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 25 03:18:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 03:18:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 25 03:18:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:18:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 KiB/s wr, 17 op/s
Nov 25 03:19:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 4.9 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Nov 25 03:19:02 np0005534516 nova_compute[253538]: 2025-11-25 08:19:02.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 KiB/s wr, 47 op/s
Nov 25 03:19:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 25 03:19:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 03:19:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 25 03:19:03 np0005534516 nova_compute[253538]: 2025-11-25 08:19:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:19:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:04 np0005534516 nova_compute[253538]: 2025-11-25 08:19:04.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:19:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 03:19:05 np0005534516 nova_compute[253538]: 2025-11-25 08:19:05.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:19:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:19:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951231838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.001 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.211 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.212 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.212 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.212 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.281 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:19:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:19:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427168129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.676 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.681 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.698 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.699 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:19:06 np0005534516 nova_compute[253538]: 2025-11-25 08:19:06.700 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:19:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 36 op/s
Nov 25 03:19:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:08 np0005534516 nova_compute[253538]: 2025-11-25 08:19:08.700 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:08 np0005534516 nova_compute[253538]: 2025-11-25 08:19:08.701 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 36 op/s
Nov 25 03:19:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 25 03:19:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 03:19:10 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 25 03:19:10 np0005534516 nova_compute[253538]: 2025-11-25 08:19:10.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:19:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Nov 25 03:19:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 950 B/s wr, 4 op/s
Nov 25 03:19:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 25 03:19:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 03:19:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 03:19:19 np0005534516 podman[263181]: 2025-11-25 08:19:19.146958338 +0000 UTC m=+0.054997611 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.587326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759587350, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1393, "num_deletes": 253, "total_data_size": 2099313, "memory_usage": 2137120, "flush_reason": "Manual Compaction"}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759606948, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2066741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19583, "largest_seqno": 20975, "table_properties": {"data_size": 2060231, "index_size": 3713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13631, "raw_average_key_size": 19, "raw_value_size": 2047043, "raw_average_value_size": 2992, "num_data_blocks": 169, "num_entries": 684, "num_filter_entries": 684, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058628, "oldest_key_time": 1764058628, "file_creation_time": 1764058759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19667 microseconds, and 4546 cpu microseconds.
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.606982) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2066741 bytes OK
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.607008) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.614708) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.614721) EVENT_LOG_v1 {"time_micros": 1764058759614717, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.614737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2093137, prev total WAL file size 2093598, number of live WAL files 2.
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.615333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2018KB)], [47(7259KB)]
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759615366, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9500214, "oldest_snapshot_seqno": -1}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d0f64682-a801-45fb-8db6-bb0e2ed9d8e5 does not exist
Nov 25 03:19:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 50e7b085-ceb6-49e1-9d8a-2df127e6f440 does not exist
Nov 25 03:19:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2dba1a42-0ef9-4b53-a0f6-e724f18b5d36 does not exist
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4368 keys, 7732920 bytes, temperature: kUnknown
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759661608, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7732920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7702764, "index_size": 18142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 108369, "raw_average_key_size": 24, "raw_value_size": 7622635, "raw_average_value_size": 1745, "num_data_blocks": 764, "num_entries": 4368, "num_filter_entries": 4368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.661844) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7732920 bytes
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.663161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.1 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4888, records dropped: 520 output_compression: NoCompression
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.663181) EVENT_LOG_v1 {"time_micros": 1764058759663172, "job": 24, "event": "compaction_finished", "compaction_time_micros": 46320, "compaction_time_cpu_micros": 17430, "output_level": 6, "num_output_files": 1, "total_output_size": 7732920, "num_input_records": 4888, "num_output_records": 4368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759663713, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058759665490, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.615239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.665557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.665561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.665564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.665565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:19:19.665567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.302098261 +0000 UTC m=+0.054103146 container create 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:19:20 np0005534516 systemd[1]: Started libpod-conmon-4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673.scope.
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.274838528 +0000 UTC m=+0.026843423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.396181695 +0000 UTC m=+0.148186570 container init 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.405791005 +0000 UTC m=+0.157795860 container start 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.408377217 +0000 UTC m=+0.160382102 container attach 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:19:20 np0005534516 beautiful_raman[263449]: 167 167
Nov 25 03:19:20 np0005534516 systemd[1]: libpod-4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673.scope: Deactivated successfully.
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.412936294 +0000 UTC m=+0.164941179 container died 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:19:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-47c2bf79a096b234662320b033fc045f05cef6050140e9483877e1386e37b871-merged.mount: Deactivated successfully.
Nov 25 03:19:20 np0005534516 podman[263433]: 2025-11-25 08:19:20.458546422 +0000 UTC m=+0.210551287 container remove 4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:19:20 np0005534516 systemd[1]: libpod-conmon-4a23c5a32b4cab958a6e05bead7dd99fd8c7354d9f8b7ec994a35c25a2118673.scope: Deactivated successfully.
Nov 25 03:19:20 np0005534516 podman[263473]: 2025-11-25 08:19:20.62993267 +0000 UTC m=+0.052014858 container create 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:19:20 np0005534516 systemd[1]: Started libpod-conmon-15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20.scope.
Nov 25 03:19:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:20 np0005534516 podman[263473]: 2025-11-25 08:19:20.61171266 +0000 UTC m=+0.033794828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:20 np0005534516 podman[263473]: 2025-11-25 08:19:20.709547419 +0000 UTC m=+0.131629587 container init 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 03:19:20 np0005534516 podman[263473]: 2025-11-25 08:19:20.719875438 +0000 UTC m=+0.141957586 container start 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:19:20 np0005534516 podman[263473]: 2025-11-25 08:19:20.72316017 +0000 UTC m=+0.145242328 container attach 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:19:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Nov 25 03:19:21 np0005534516 cranky_ellis[263489]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:19:21 np0005534516 cranky_ellis[263489]: --> relative data size: 1.0
Nov 25 03:19:21 np0005534516 cranky_ellis[263489]: --> All data devices are unavailable
Nov 25 03:19:21 np0005534516 systemd[1]: libpod-15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20.scope: Deactivated successfully.
Nov 25 03:19:21 np0005534516 systemd[1]: libpod-15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20.scope: Consumed 1.013s CPU time.
Nov 25 03:19:21 np0005534516 podman[263473]: 2025-11-25 08:19:21.779672152 +0000 UTC m=+1.201754300 container died 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:19:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c56b434b35a8df4fa662fed750b5708d0c5ce6371f3ccf387c013db9dd20ab6c-merged.mount: Deactivated successfully.
Nov 25 03:19:21 np0005534516 podman[263473]: 2025-11-25 08:19:21.832045898 +0000 UTC m=+1.254128046 container remove 15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:19:21 np0005534516 systemd[1]: libpod-conmon-15b74451a0c63cf631ab6e883c89684b643d65a260bb15d64759d2cccc139a20.scope: Deactivated successfully.
Nov 25 03:19:22 np0005534516 podman[263553]: 2025-11-25 08:19:22.030392972 +0000 UTC m=+0.069906298 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.43570735 +0000 UTC m=+0.046844242 container create 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:19:22 np0005534516 systemd[1]: Started libpod-conmon-5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd.scope.
Nov 25 03:19:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.50713455 +0000 UTC m=+0.118271422 container init 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.417869381 +0000 UTC m=+0.029006253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.513614572 +0000 UTC m=+0.124751424 container start 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.51679177 +0000 UTC m=+0.127928652 container attach 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:19:22 np0005534516 reverent_babbage[263702]: 167 167
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.519439255 +0000 UTC m=+0.130576107 container died 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:19:22 np0005534516 systemd[1]: libpod-5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd.scope: Deactivated successfully.
Nov 25 03:19:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b265f743cefb37798c893d62ef7662dab52aec362d02440a53b392ffeb4973e3-merged.mount: Deactivated successfully.
Nov 25 03:19:22 np0005534516 podman[263686]: 2025-11-25 08:19:22.55211555 +0000 UTC m=+0.163252402 container remove 5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:19:22 np0005534516 systemd[1]: libpod-conmon-5074811e983c1572f928bff6f1e24bf914d010b149468ba5e6a8881b750fc3bd.scope: Deactivated successfully.
Nov 25 03:19:22 np0005534516 podman[263725]: 2025-11-25 08:19:22.727805719 +0000 UTC m=+0.050520525 container create 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:19:22 np0005534516 systemd[1]: Started libpod-conmon-68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d.scope.
Nov 25 03:19:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a775a916db7ec3899c2d09dd633344f198238a1d0830d748a29a27f91a9581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:22 np0005534516 podman[263725]: 2025-11-25 08:19:22.70641162 +0000 UTC m=+0.029126456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a775a916db7ec3899c2d09dd633344f198238a1d0830d748a29a27f91a9581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a775a916db7ec3899c2d09dd633344f198238a1d0830d748a29a27f91a9581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a775a916db7ec3899c2d09dd633344f198238a1d0830d748a29a27f91a9581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:22 np0005534516 podman[263725]: 2025-11-25 08:19:22.812933462 +0000 UTC m=+0.135648288 container init 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:19:22 np0005534516 podman[263725]: 2025-11-25 08:19:22.820204506 +0000 UTC m=+0.142919302 container start 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 03:19:22 np0005534516 podman[263725]: 2025-11-25 08:19:22.822585343 +0000 UTC m=+0.145300179 container attach 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:19:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.2 KiB/s wr, 12 op/s
Nov 25 03:19:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:23 np0005534516 serene_black[263741]: {
Nov 25 03:19:23 np0005534516 serene_black[263741]:    "0": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:        {
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "devices": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "/dev/loop3"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            ],
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_name": "ceph_lv0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_size": "21470642176",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "name": "ceph_lv0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "tags": {
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_name": "ceph",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.crush_device_class": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.encrypted": "0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_id": "0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.vdo": "0"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            },
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "vg_name": "ceph_vg0"
Nov 25 03:19:23 np0005534516 serene_black[263741]:        }
Nov 25 03:19:23 np0005534516 serene_black[263741]:    ],
Nov 25 03:19:23 np0005534516 serene_black[263741]:    "1": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:        {
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "devices": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "/dev/loop4"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            ],
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_name": "ceph_lv1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_size": "21470642176",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "name": "ceph_lv1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "tags": {
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_name": "ceph",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.crush_device_class": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.encrypted": "0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_id": "1",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.vdo": "0"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            },
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "vg_name": "ceph_vg1"
Nov 25 03:19:23 np0005534516 serene_black[263741]:        }
Nov 25 03:19:23 np0005534516 serene_black[263741]:    ],
Nov 25 03:19:23 np0005534516 serene_black[263741]:    "2": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:        {
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "devices": [
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "/dev/loop5"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            ],
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_name": "ceph_lv2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_size": "21470642176",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "name": "ceph_lv2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "tags": {
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.cluster_name": "ceph",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.crush_device_class": "",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.encrypted": "0",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osd_id": "2",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:                "ceph.vdo": "0"
Nov 25 03:19:23 np0005534516 serene_black[263741]:            },
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "type": "block",
Nov 25 03:19:23 np0005534516 serene_black[263741]:            "vg_name": "ceph_vg2"
Nov 25 03:19:23 np0005534516 serene_black[263741]:        }
Nov 25 03:19:23 np0005534516 serene_black[263741]:    ]
Nov 25 03:19:23 np0005534516 serene_black[263741]: }
Nov 25 03:19:23 np0005534516 systemd[1]: libpod-68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d.scope: Deactivated successfully.
Nov 25 03:19:23 np0005534516 podman[263725]: 2025-11-25 08:19:23.552044487 +0000 UTC m=+0.874759313 container died 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:19:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-89a775a916db7ec3899c2d09dd633344f198238a1d0830d748a29a27f91a9581-merged.mount: Deactivated successfully.
Nov 25 03:19:23 np0005534516 podman[263725]: 2025-11-25 08:19:23.616294636 +0000 UTC m=+0.939009432 container remove 68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_black, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:19:23 np0005534516 systemd[1]: libpod-conmon-68cc735f0da1169355ec94082e6fb4c6a5db8dab5dbfb75e11d1d6913cd6587d.scope: Deactivated successfully.
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.292004615 +0000 UTC m=+0.094061275 container create 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.224737142 +0000 UTC m=+0.026793802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:24 np0005534516 systemd[1]: Started libpod-conmon-2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b.scope.
Nov 25 03:19:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.406596664 +0000 UTC m=+0.208653364 container init 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.417130049 +0000 UTC m=+0.219186709 container start 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:19:24 np0005534516 nervous_murdock[263918]: 167 167
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.423351733 +0000 UTC m=+0.225408443 container attach 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:19:24 np0005534516 systemd[1]: libpod-2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b.scope: Deactivated successfully.
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.424403152 +0000 UTC m=+0.226459852 container died 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:19:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ab5eefc60612d51e4746dec189476fd04299fc3638fdfff4f0c9b99eefb6906d-merged.mount: Deactivated successfully.
Nov 25 03:19:24 np0005534516 podman[263901]: 2025-11-25 08:19:24.653118446 +0000 UTC m=+0.455175106 container remove 2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:19:24 np0005534516 systemd[1]: libpod-conmon-2b312d5781445324f0122b73890e01f71e8520a650bef066b2cfd0d4c2c3013b.scope: Deactivated successfully.
Nov 25 03:19:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 597 B/s wr, 9 op/s
Nov 25 03:19:24 np0005534516 podman[263942]: 2025-11-25 08:19:24.883884267 +0000 UTC m=+0.063918680 container create 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:19:24 np0005534516 podman[263942]: 2025-11-25 08:19:24.855471452 +0000 UTC m=+0.035505875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:19:25 np0005534516 systemd[1]: Started libpod-conmon-734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9.scope.
Nov 25 03:19:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:19:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214ee64036e56db9c1e9f12c2654d05701effe5bd083da85755a4ed32e4ca9e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214ee64036e56db9c1e9f12c2654d05701effe5bd083da85755a4ed32e4ca9e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214ee64036e56db9c1e9f12c2654d05701effe5bd083da85755a4ed32e4ca9e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214ee64036e56db9c1e9f12c2654d05701effe5bd083da85755a4ed32e4ca9e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:19:25 np0005534516 podman[263942]: 2025-11-25 08:19:25.088891207 +0000 UTC m=+0.268925600 container init 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:19:25 np0005534516 podman[263942]: 2025-11-25 08:19:25.102461217 +0000 UTC m=+0.282495590 container start 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:19:25 np0005534516 podman[263942]: 2025-11-25 08:19:25.105561084 +0000 UTC m=+0.285595477 container attach 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:19:26 np0005534516 serene_franklin[263958]: {
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_id": 1,
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "type": "bluestore"
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    },
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_id": 2,
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "type": "bluestore"
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    },
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_id": 0,
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:        "type": "bluestore"
Nov 25 03:19:26 np0005534516 serene_franklin[263958]:    }
Nov 25 03:19:26 np0005534516 serene_franklin[263958]: }
Nov 25 03:19:26 np0005534516 systemd[1]: libpod-734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9.scope: Deactivated successfully.
Nov 25 03:19:26 np0005534516 podman[263942]: 2025-11-25 08:19:26.212116037 +0000 UTC m=+1.392150420 container died 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:19:26 np0005534516 systemd[1]: libpod-734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9.scope: Consumed 1.116s CPU time.
Nov 25 03:19:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-214ee64036e56db9c1e9f12c2654d05701effe5bd083da85755a4ed32e4ca9e9-merged.mount: Deactivated successfully.
Nov 25 03:19:26 np0005534516 podman[263942]: 2025-11-25 08:19:26.272266131 +0000 UTC m=+1.452300504 container remove 734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:19:26 np0005534516 systemd[1]: libpod-conmon-734aaa3066a258cf93c818f3c91ebd7f5b2516b13c083143beb6532677a292a9.scope: Deactivated successfully.
Nov 25 03:19:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:19:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:19:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev af7f0031-a31c-45ae-8efe-bf5d2baf9ed0 does not exist
Nov 25 03:19:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7a43e9d6-18e1-4147-88a5-75b64ab7a28b does not exist
Nov 25 03:19:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 426 B/s wr, 8 op/s
Nov 25 03:19:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:19:27 np0005534516 podman[264053]: 2025-11-25 08:19:27.849300427 +0000 UTC m=+0.092036088 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 03:19:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:19:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216047470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:19:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:19:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216047470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:19:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:19:41.045 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:19:41.045 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:19:41.045 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:19:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:49 np0005534516 podman[264080]: 2025-11-25 08:19:49.806794851 +0000 UTC m=+0.054946650 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:19:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:52 np0005534516 podman[264099]: 2025-11-25 08:19:52.353348094 +0000 UTC m=+0.065252408 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:19:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:19:53
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log']
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:19:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:19:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:57 np0005534516 nova_compute[253538]: 2025-11-25 08:19:57.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:19:57 np0005534516 nova_compute[253538]: 2025-11-25 08:19:57.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:19:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:19:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:19:58 np0005534516 podman[264120]: 2025-11-25 08:19:58.854776629 +0000 UTC m=+0.108775116 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:20:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:02 np0005534516 nova_compute[253538]: 2025-11-25 08:20:02.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:03 np0005534516 nova_compute[253538]: 2025-11-25 08:20:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:20:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:04 np0005534516 nova_compute[253538]: 2025-11-25 08:20:04.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:20:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:20:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.5 total, 600.0 interval#012Cumulative writes: 5877 writes, 23K keys, 5877 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5877 writes, 1040 syncs, 5.65 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 455 writes, 1002 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s#012Interval WAL: 455 writes, 217 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:20:05 np0005534516 nova_compute[253538]: 2025-11-25 08:20:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:05 np0005534516 nova_compute[253538]: 2025-11-25 08:20:05.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:05 np0005534516 nova_compute[253538]: 2025-11-25 08:20:05.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:20:05 np0005534516 nova_compute[253538]: 2025-11-25 08:20:05.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:20:06 np0005534516 nova_compute[253538]: 2025-11-25 08:20:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:20:07 np0005534516 nova_compute[253538]: 2025-11-25 08:20:07.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:20:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:20:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042700887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.008 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:20:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.203 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.205 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.206 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.206 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.377 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.377 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.445 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:20:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:20:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104225698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.945 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.955 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.970 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.972 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:20:08 np0005534516 nova_compute[253538]: 2025-11-25 08:20:08.972 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:20:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:10 np0005534516 nova_compute[253538]: 2025-11-25 08:20:10.957 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:10 np0005534516 nova_compute[253538]: 2025-11-25 08:20:10.970 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:10 np0005534516 nova_compute[253538]: 2025-11-25 08:20:10.970 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:20:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.4 total, 600.0 interval#012Cumulative writes: 6510 writes, 26K keys, 6510 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6510 writes, 1105 syncs, 5.89 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 347 writes, 977 keys, 347 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s#012Interval WAL: 347 writes, 156 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:20:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:20 np0005534516 podman[264195]: 2025-11-25 08:20:20.790132352 +0000 UTC m=+0.048439107 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:20:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:20:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1802.4 total, 600.0 interval#012Cumulative writes: 5817 writes, 24K keys, 5817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5817 writes, 930 syncs, 6.25 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 433 writes, 984 keys, 433 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s#012Interval WAL: 433 writes, 200 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:20:22 np0005534516 podman[264215]: 2025-11-25 08:20:22.792879137 +0000 UTC m=+0.051460932 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:20:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1a80add6-143c-4382-8926-740e660834c7 does not exist
Nov 25 03:20:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f36270e1-3873-490c-9108-33742333446a does not exist
Nov 25 03:20:27 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0938553a-d746-4d0e-85f6-edee193081d0 does not exist
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:20:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.587489521 +0000 UTC m=+0.108544179 container create 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.518210522 +0000 UTC m=+0.039265210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:28 np0005534516 systemd[1]: Started libpod-conmon-3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646.scope.
Nov 25 03:20:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.762968313 +0000 UTC m=+0.284022991 container init 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.771451431 +0000 UTC m=+0.292506089 container start 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:20:28 np0005534516 silly_faraday[264643]: 167 167
Nov 25 03:20:28 np0005534516 systemd[1]: libpod-3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646.scope: Deactivated successfully.
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.807211383 +0000 UTC m=+0.328266041 container attach 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:20:28 np0005534516 podman[264627]: 2025-11-25 08:20:28.808962891 +0000 UTC m=+0.330017599 container died 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:20:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2551336119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:20:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2551336119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:20:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7e8b032b1d4f6e177c3f8223fd8296c7bb0f8b4af1362f093fee0aedf6bb4c5c-merged.mount: Deactivated successfully.
Nov 25 03:20:29 np0005534516 podman[264627]: 2025-11-25 08:20:29.295173926 +0000 UTC m=+0.816228584 container remove 3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:20:29 np0005534516 systemd[1]: libpod-conmon-3b9e6ed2267afd4a8549fa66ff8c9d09bf2ca18cac942996d3ca2e4d292e5646.scope: Deactivated successfully.
Nov 25 03:20:29 np0005534516 podman[264660]: 2025-11-25 08:20:29.387181482 +0000 UTC m=+0.159313891 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:20:29 np0005534516 podman[264691]: 2025-11-25 08:20:29.487024457 +0000 UTC m=+0.058955122 container create 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:20:29 np0005534516 systemd[1]: Started libpod-conmon-804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f.scope.
Nov 25 03:20:29 np0005534516 podman[264691]: 2025-11-25 08:20:29.453255282 +0000 UTC m=+0.025185967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:29 np0005534516 podman[264691]: 2025-11-25 08:20:29.601724209 +0000 UTC m=+0.173654874 container init 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:20:29 np0005534516 podman[264691]: 2025-11-25 08:20:29.608721095 +0000 UTC m=+0.180651780 container start 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:20:29 np0005534516 podman[264691]: 2025-11-25 08:20:29.621278286 +0000 UTC m=+0.193208951 container attach 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:20:30 np0005534516 hopeful_heisenberg[264710]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:20:30 np0005534516 hopeful_heisenberg[264710]: --> relative data size: 1.0
Nov 25 03:20:30 np0005534516 hopeful_heisenberg[264710]: --> All data devices are unavailable
Nov 25 03:20:30 np0005534516 systemd[1]: libpod-804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f.scope: Deactivated successfully.
Nov 25 03:20:30 np0005534516 podman[264691]: 2025-11-25 08:20:30.713418415 +0000 UTC m=+1.285349080 container died 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:20:30 np0005534516 systemd[1]: libpod-804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f.scope: Consumed 1.046s CPU time.
Nov 25 03:20:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-253ac5c3eb1c36392a72cd42efa1b4f91eb2db43be9cf868061c774c6bc4f5ff-merged.mount: Deactivated successfully.
Nov 25 03:20:31 np0005534516 podman[264691]: 2025-11-25 08:20:31.165513373 +0000 UTC m=+1.737444048 container remove 804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heisenberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:20:31 np0005534516 systemd[1]: libpod-conmon-804402f66a1f8f8a468779f24c22b7f595b44c355e172b740c983119f9130b0f.scope: Deactivated successfully.
Nov 25 03:20:31 np0005534516 podman[264892]: 2025-11-25 08:20:31.840518764 +0000 UTC m=+0.030932967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:31 np0005534516 podman[264892]: 2025-11-25 08:20:31.948043824 +0000 UTC m=+0.138457977 container create 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:20:32 np0005534516 systemd[1]: Started libpod-conmon-02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0.scope.
Nov 25 03:20:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:32 np0005534516 podman[264892]: 2025-11-25 08:20:32.423440255 +0000 UTC m=+0.613854408 container init 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:20:32 np0005534516 podman[264892]: 2025-11-25 08:20:32.435362128 +0000 UTC m=+0.625776281 container start 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:20:32 np0005534516 strange_chebyshev[264908]: 167 167
Nov 25 03:20:32 np0005534516 systemd[1]: libpod-02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0.scope: Deactivated successfully.
Nov 25 03:20:32 np0005534516 podman[264892]: 2025-11-25 08:20:32.540848722 +0000 UTC m=+0.731262915 container attach 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:20:32 np0005534516 podman[264892]: 2025-11-25 08:20:32.541426548 +0000 UTC m=+0.731840701 container died 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 03:20:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-350a61752a466d635d99d95ae31eda2572cf14f5d4d8e0654aed8e733951e9f5-merged.mount: Deactivated successfully.
Nov 25 03:20:32 np0005534516 podman[264892]: 2025-11-25 08:20:32.745575555 +0000 UTC m=+0.935989668 container remove 02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chebyshev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:20:32 np0005534516 systemd[1]: libpod-conmon-02cdb8d7aa89c40995332fbbb6b336351791e86d688cc358d1fa9874d98facf0.scope: Deactivated successfully.
Nov 25 03:20:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:32 np0005534516 podman[264933]: 2025-11-25 08:20:32.928468075 +0000 UTC m=+0.062576803 container create 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:20:32 np0005534516 systemd[1]: Started libpod-conmon-89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f.scope.
Nov 25 03:20:33 np0005534516 podman[264933]: 2025-11-25 08:20:32.905755909 +0000 UTC m=+0.039864657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fd8c75e7aa60116bb943e05ca98d19e10892282dac2193c42f29a9db9e213a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fd8c75e7aa60116bb943e05ca98d19e10892282dac2193c42f29a9db9e213a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fd8c75e7aa60116bb943e05ca98d19e10892282dac2193c42f29a9db9e213a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fd8c75e7aa60116bb943e05ca98d19e10892282dac2193c42f29a9db9e213a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:33 np0005534516 podman[264933]: 2025-11-25 08:20:33.10185388 +0000 UTC m=+0.235962698 container init 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:20:33 np0005534516 podman[264933]: 2025-11-25 08:20:33.114079882 +0000 UTC m=+0.248188630 container start 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:20:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:33 np0005534516 podman[264933]: 2025-11-25 08:20:33.18687968 +0000 UTC m=+0.320988408 container attach 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]: {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    "0": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "devices": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "/dev/loop3"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            ],
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_name": "ceph_lv0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_size": "21470642176",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "name": "ceph_lv0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "tags": {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_name": "ceph",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.crush_device_class": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.encrypted": "0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_id": "0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.vdo": "0"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            },
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "vg_name": "ceph_vg0"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        }
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    ],
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    "1": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "devices": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "/dev/loop4"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            ],
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_name": "ceph_lv1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_size": "21470642176",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "name": "ceph_lv1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "tags": {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_name": "ceph",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.crush_device_class": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.encrypted": "0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_id": "1",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.vdo": "0"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            },
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "vg_name": "ceph_vg1"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        }
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    ],
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    "2": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "devices": [
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "/dev/loop5"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            ],
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_name": "ceph_lv2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_size": "21470642176",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "name": "ceph_lv2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "tags": {
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.cluster_name": "ceph",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.crush_device_class": "",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.encrypted": "0",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osd_id": "2",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:                "ceph.vdo": "0"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            },
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "type": "block",
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:            "vg_name": "ceph_vg2"
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:        }
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]:    ]
Nov 25 03:20:33 np0005534516 unruffled_meninsky[264949]: }
Nov 25 03:20:33 np0005534516 systemd[1]: libpod-89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f.scope: Deactivated successfully.
Nov 25 03:20:33 np0005534516 podman[264933]: 2025-11-25 08:20:33.917094026 +0000 UTC m=+1.051202744 container died 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:20:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d1fd8c75e7aa60116bb943e05ca98d19e10892282dac2193c42f29a9db9e213a-merged.mount: Deactivated successfully.
Nov 25 03:20:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:35 np0005534516 podman[264933]: 2025-11-25 08:20:35.247200387 +0000 UTC m=+2.381309145 container remove 89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:20:35 np0005534516 systemd[1]: libpod-conmon-89a35fd30cb901172fe42886db374aa05b59324dc03798eb7ba1d6e6806d542f.scope: Deactivated successfully.
Nov 25 03:20:35 np0005534516 podman[265111]: 2025-11-25 08:20:35.956519177 +0000 UTC m=+0.102201413 container create 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:20:35 np0005534516 podman[265111]: 2025-11-25 08:20:35.873385039 +0000 UTC m=+0.019067295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:36 np0005534516 systemd[1]: Started libpod-conmon-9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8.scope.
Nov 25 03:20:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:36 np0005534516 podman[265111]: 2025-11-25 08:20:36.277221466 +0000 UTC m=+0.422903872 container init 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:20:36 np0005534516 podman[265111]: 2025-11-25 08:20:36.289941113 +0000 UTC m=+0.435623359 container start 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:20:36 np0005534516 elegant_poincare[265128]: 167 167
Nov 25 03:20:36 np0005534516 systemd[1]: libpod-9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8.scope: Deactivated successfully.
Nov 25 03:20:36 np0005534516 podman[265111]: 2025-11-25 08:20:36.305476147 +0000 UTC m=+0.451158443 container attach 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:20:36 np0005534516 podman[265111]: 2025-11-25 08:20:36.306115376 +0000 UTC m=+0.451797612 container died 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:20:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-73045cecb17b253320225c1d7227895aa3072ecde12099baae017600ff5e28a3-merged.mount: Deactivated successfully.
Nov 25 03:20:36 np0005534516 podman[265111]: 2025-11-25 08:20:36.445923411 +0000 UTC m=+0.591605637 container remove 9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:20:36 np0005534516 systemd[1]: libpod-conmon-9afd36c871094e61e7259dccd0a5349180d224514b1af10b8f6b2b2103cb98b8.scope: Deactivated successfully.
Nov 25 03:20:36 np0005534516 podman[265153]: 2025-11-25 08:20:36.620294413 +0000 UTC m=+0.067331446 container create 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 03:20:36 np0005534516 podman[265153]: 2025-11-25 08:20:36.579539281 +0000 UTC m=+0.026576334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:20:36 np0005534516 systemd[1]: Started libpod-conmon-420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6.scope.
Nov 25 03:20:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:20:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4bc74798bd6c70f375e773fad45fa3ed4a1338c3e9a9f440ae6d667f34eab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4bc74798bd6c70f375e773fad45fa3ed4a1338c3e9a9f440ae6d667f34eab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4bc74798bd6c70f375e773fad45fa3ed4a1338c3e9a9f440ae6d667f34eab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4bc74798bd6c70f375e773fad45fa3ed4a1338c3e9a9f440ae6d667f34eab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:20:36 np0005534516 podman[265153]: 2025-11-25 08:20:36.724658305 +0000 UTC m=+0.171695358 container init 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:20:36 np0005534516 podman[265153]: 2025-11-25 08:20:36.730960451 +0000 UTC m=+0.177997484 container start 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 03:20:36 np0005534516 podman[265153]: 2025-11-25 08:20:36.756476405 +0000 UTC m=+0.203513468 container attach 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 03:20:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]: {
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_id": 1,
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "type": "bluestore"
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    },
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_id": 2,
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "type": "bluestore"
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    },
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_id": 0,
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:        "type": "bluestore"
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]:    }
Nov 25 03:20:37 np0005534516 lucid_haibt[265169]: }
Nov 25 03:20:37 np0005534516 systemd[1]: libpod-420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6.scope: Deactivated successfully.
Nov 25 03:20:37 np0005534516 systemd[1]: libpod-420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6.scope: Consumed 1.081s CPU time.
Nov 25 03:20:37 np0005534516 podman[265153]: 2025-11-25 08:20:37.821883056 +0000 UTC m=+1.268920099 container died 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:20:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bc4bc74798bd6c70f375e773fad45fa3ed4a1338c3e9a9f440ae6d667f34eab0-merged.mount: Deactivated successfully.
Nov 25 03:20:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:38 np0005534516 podman[265153]: 2025-11-25 08:20:38.436237787 +0000 UTC m=+1.883274860 container remove 420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:20:38 np0005534516 systemd[1]: libpod-conmon-420d179b8149af5dcc5c848c2efd95f52af7b0371852a596d987084b177c44c6.scope: Deactivated successfully.
Nov 25 03:20:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:20:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:20:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4691570c-1a17-4b62-9bf2-b58fd9627f6c does not exist
Nov 25 03:20:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c3c274e4-55e8-4ad5-a7b6-9f6597ec5269 does not exist
Nov 25 03:20:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:20:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:20:41.046 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:20:41.047 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:20:41.047 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:20:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 03:20:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:51 np0005534516 podman[265266]: 2025-11-25 08:20:51.819483799 +0000 UTC m=+0.061954957 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:20:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:20:53
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'volumes', 'backups', '.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:20:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:20:53 np0005534516 podman[265286]: 2025-11-25 08:20:53.796432422 +0000 UTC m=+0.055534846 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:20:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 25 03:20:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 03:20:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 25 03:20:56 np0005534516 nova_compute[253538]: 2025-11-25 08:20:56.843 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:20:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:20:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:20:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 25 03:20:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 03:20:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 25 03:20:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 24 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.0 MiB/s wr, 34 op/s
Nov 25 03:20:59 np0005534516 podman[265307]: 2025-11-25 08:20:59.84690537 +0000 UTC m=+0.097621495 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 03:21:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 33 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.1 MiB/s wr, 35 op/s
Nov 25 03:21:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 41 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Nov 25 03:21:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:03 np0005534516 nova_compute[253538]: 2025-11-25 08:21:03.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:21:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:04 np0005534516 nova_compute[253538]: 2025-11-25 08:21:04.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:21:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.8 MiB/s wr, 44 op/s
Nov 25 03:21:05 np0005534516 nova_compute[253538]: 2025-11-25 08:21:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Nov 25 03:21:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:21:08 np0005534516 nova_compute[253538]: 2025-11-25 08:21:08.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.6 MiB/s wr, 10 op/s
Nov 25 03:21:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460794611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.036 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.227 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.228 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.229 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.229 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.297 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.298 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.418 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.438 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.438 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.457 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.478 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:21:09 np0005534516 nova_compute[253538]: 2025-11-25 08:21:09.507 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634598595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:10 np0005534516 nova_compute[253538]: 2025-11-25 08:21:10.011 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:10 np0005534516 nova_compute[253538]: 2025-11-25 08:21:10.019 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:21:10 np0005534516 nova_compute[253538]: 2025-11-25 08:21:10.041 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:21:10 np0005534516 nova_compute[253538]: 2025-11-25 08:21:10.042 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:21:10 np0005534516 nova_compute[253538]: 2025-11-25 08:21:10.043 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.4 MiB/s wr, 8 op/s
Nov 25 03:21:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 684 KiB/s wr, 8 op/s
Nov 25 03:21:13 np0005534516 nova_compute[253538]: 2025-11-25 08:21:13.042 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:13 np0005534516 nova_compute[253538]: 2025-11-25 08:21:13.043 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:21:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 03:21:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:15.435 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:21:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:15.436 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:21:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:18.438 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:21:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:22 np0005534516 podman[265377]: 2025-11-25 08:21:22.250174783 +0000 UTC m=+0.089034944 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:21:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:24 np0005534516 podman[265396]: 2025-11-25 08:21:24.806053745 +0000 UTC m=+0.057038938 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:21:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:21:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4119902960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:21:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:21:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4119902960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:21:30 np0005534516 podman[265417]: 2025-11-25 08:21:30.874034315 +0000 UTC m=+0.116197465 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 25 03:21:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:37 np0005534516 nova_compute[253538]: 2025-11-25 08:21:37.444 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:37 np0005534516 nova_compute[253538]: 2025-11-25 08:21:37.444 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:37 np0005534516 nova_compute[253538]: 2025-11-25 08:21:37.733 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.103 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.104 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.111 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.112 253542 INFO nova.compute.claims [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:21:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.293 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085823408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.723 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.728 253542 DEBUG nova.compute.provider_tree [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.741 253542 DEBUG nova.scheduler.client.report [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.771 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.773 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:21:38 np0005534516 nova_compute[253538]: 2025-11-25 08:21:38.838 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 03:21:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.229 253542 INFO nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.273 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.587 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.588 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.589 253542 INFO nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Creating image(s)#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.613 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.651 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.687 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.691 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:39 np0005534516 nova_compute[253538]: 2025-11-25 08:21:39.692 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.185131757 +0000 UTC m=+0.036650027 container create 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:21:40 np0005534516 systemd[1]: Started libpod-conmon-2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96.scope.
Nov 25 03:21:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.254240442 +0000 UTC m=+0.105758722 container init 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.259447448 +0000 UTC m=+0.110965718 container start 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.261747762 +0000 UTC m=+0.113266052 container attach 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:21:40 np0005534516 zen_wright[265808]: 167 167
Nov 25 03:21:40 np0005534516 systemd[1]: libpod-2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96.scope: Deactivated successfully.
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.263741128 +0000 UTC m=+0.115259408 container died 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.168784299 +0000 UTC m=+0.020302589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3f63f234341b280acdcbb7fb241dbfb7673ea0060359c3cd1ad3362cce33a026-merged.mount: Deactivated successfully.
Nov 25 03:21:40 np0005534516 podman[265792]: 2025-11-25 08:21:40.299416057 +0000 UTC m=+0.150934337 container remove 2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:21:40 np0005534516 systemd[1]: libpod-conmon-2361a1605bd686ea4ecc940db0502ab646d2a99d2cc9a5236b301f9fdec0be96.scope: Deactivated successfully.
Nov 25 03:21:40 np0005534516 podman[265832]: 2025-11-25 08:21:40.498139641 +0000 UTC m=+0.054634131 container create 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:21:40 np0005534516 systemd[1]: Started libpod-conmon-32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43.scope.
Nov 25 03:21:40 np0005534516 podman[265832]: 2025-11-25 08:21:40.473395578 +0000 UTC m=+0.029890088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3779cdc4fa9c05f1f1a4def63ea7bf1049c09bc0579f1914b43f9446c8c6a10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3779cdc4fa9c05f1f1a4def63ea7bf1049c09bc0579f1914b43f9446c8c6a10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3779cdc4fa9c05f1f1a4def63ea7bf1049c09bc0579f1914b43f9446c8c6a10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3779cdc4fa9c05f1f1a4def63ea7bf1049c09bc0579f1914b43f9446c8c6a10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:40 np0005534516 podman[265832]: 2025-11-25 08:21:40.590589229 +0000 UTC m=+0.147083719 container init 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:21:40 np0005534516 podman[265832]: 2025-11-25 08:21:40.60311873 +0000 UTC m=+0.159613190 container start 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:21:40 np0005534516 podman[265832]: 2025-11-25 08:21:40.606280699 +0000 UTC m=+0.162775159 container attach 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:21:40 np0005534516 nova_compute[253538]: 2025-11-25 08:21:40.789 253542 DEBUG nova.virt.libvirt.imagebackend [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 03:21:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 03:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:41.047 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:41.047 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:21:41.047 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:42 np0005534516 romantic_raman[265848]: [
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:    {
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "available": false,
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "ceph_device": false,
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "lsm_data": {},
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "lvs": [],
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "path": "/dev/sr0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "rejected_reasons": [
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "Has a FileSystem",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "Insufficient space (<5GB)"
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        ],
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        "sys_api": {
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "actuators": null,
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "device_nodes": "sr0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "devname": "sr0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "human_readable_size": "482.00 KB",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "id_bus": "ata",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "model": "QEMU DVD-ROM",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "nr_requests": "2",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "parent": "/dev/sr0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "partitions": {},
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "path": "/dev/sr0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "removable": "1",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "rev": "2.5+",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "ro": "0",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "rotational": "1",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "sas_address": "",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "sas_device_handle": "",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "scheduler_mode": "mq-deadline",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "sectors": 0,
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "sectorsize": "2048",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "size": 493568.0,
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "support_discard": "2048",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "type": "disk",
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:            "vendor": "QEMU"
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:        }
Nov 25 03:21:42 np0005534516 romantic_raman[265848]:    }
Nov 25 03:21:42 np0005534516 romantic_raman[265848]: ]
Nov 25 03:21:42 np0005534516 systemd[1]: libpod-32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43.scope: Deactivated successfully.
Nov 25 03:21:42 np0005534516 systemd[1]: libpod-32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43.scope: Consumed 1.504s CPU time.
Nov 25 03:21:42 np0005534516 podman[265832]: 2025-11-25 08:21:42.062720987 +0000 UTC m=+1.619215437 container died 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:21:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c3779cdc4fa9c05f1f1a4def63ea7bf1049c09bc0579f1914b43f9446c8c6a10-merged.mount: Deactivated successfully.
Nov 25 03:21:42 np0005534516 podman[265832]: 2025-11-25 08:21:42.121640878 +0000 UTC m=+1.678135328 container remove 32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 03:21:42 np0005534516 systemd[1]: libpod-conmon-32b5386ef7488de32f9db7b3ac0c91cd6ed7c80142f2d193e3e2098907976b43.scope: Deactivated successfully.
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 14a4a0ca-5aa5-46b5-a2ac-d76dd1a5974f does not exist
Nov 25 03:21:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 66aa50f1-7eaa-4e46-8ced-ed907e576f2f does not exist
Nov 25 03:21:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2bc15d35-c325-461b-8843-2934c58e1ce4 does not exist
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:21:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.288 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.386 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.part --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.388 253542 DEBUG nova.virt.images [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.390 253542 DEBUG nova.privsep.utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.390 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.part /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.630 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.part /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.converted" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.640 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.700 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.702 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.737 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:42 np0005534516 nova_compute[253538]: 2025-11-25 08:21:42.744 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 op/s
Nov 25 03:21:42 np0005534516 podman[268014]: 2025-11-25 08:21:42.903289504 +0000 UTC m=+0.047223403 container create 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:21:42 np0005534516 systemd[1]: Started libpod-conmon-0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb.scope.
Nov 25 03:21:42 np0005534516 podman[268014]: 2025-11-25 08:21:42.882054539 +0000 UTC m=+0.025988438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:43 np0005534516 podman[268014]: 2025-11-25 08:21:43.007372938 +0000 UTC m=+0.151306887 container init 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:21:43 np0005534516 podman[268014]: 2025-11-25 08:21:43.018229192 +0000 UTC m=+0.162163061 container start 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:21:43 np0005534516 podman[268014]: 2025-11-25 08:21:43.022063489 +0000 UTC m=+0.165997438 container attach 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:21:43 np0005534516 dreamy_ellis[268031]: 167 167
Nov 25 03:21:43 np0005534516 systemd[1]: libpod-0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb.scope: Deactivated successfully.
Nov 25 03:21:43 np0005534516 podman[268014]: 2025-11-25 08:21:43.02746566 +0000 UTC m=+0.171399559 container died 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:21:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1a84ec9388409901e4f694855e49f52f079d6dc5daf5448e836d6af3a3558fc0-merged.mount: Deactivated successfully.
Nov 25 03:21:43 np0005534516 podman[268014]: 2025-11-25 08:21:43.071466902 +0000 UTC m=+0.215400771 container remove 0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:21:43 np0005534516 systemd[1]: libpod-conmon-0619565f1ba30d95479e8b7f4af7bf806966ca073124a8797783f779466a1ffb.scope: Deactivated successfully.
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 03:21:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 25 03:21:43 np0005534516 podman[268054]: 2025-11-25 08:21:43.251229975 +0000 UTC m=+0.042423178 container create a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:21:43 np0005534516 systemd[1]: Started libpod-conmon-a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38.scope.
Nov 25 03:21:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:43 np0005534516 podman[268054]: 2025-11-25 08:21:43.231734169 +0000 UTC m=+0.022927422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:43 np0005534516 podman[268054]: 2025-11-25 08:21:43.331630246 +0000 UTC m=+0.122823469 container init a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:21:43 np0005534516 podman[268054]: 2025-11-25 08:21:43.342105609 +0000 UTC m=+0.133298822 container start a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:21:43 np0005534516 podman[268054]: 2025-11-25 08:21:43.346927804 +0000 UTC m=+0.138121067 container attach a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:21:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 25 03:21:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 03:21:44 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 25 03:21:44 np0005534516 hardcore_montalcini[268070]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:21:44 np0005534516 hardcore_montalcini[268070]: --> relative data size: 1.0
Nov 25 03:21:44 np0005534516 hardcore_montalcini[268070]: --> All data devices are unavailable
Nov 25 03:21:44 np0005534516 systemd[1]: libpod-a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38.scope: Deactivated successfully.
Nov 25 03:21:44 np0005534516 podman[268054]: 2025-11-25 08:21:44.395904644 +0000 UTC m=+1.187097847 container died a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:21:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b75b7fab835fc44294e929ab133c18687386803988c7a65febb436c43c7dd2ae-merged.mount: Deactivated successfully.
Nov 25 03:21:44 np0005534516 podman[268054]: 2025-11-25 08:21:44.506871272 +0000 UTC m=+1.298064475 container remove a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.505 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:44 np0005534516 systemd[1]: libpod-conmon-a74dc7607faf3564bfe9d0d368c2424f4e0e7527459422aa4f1e5f9145525c38.scope: Deactivated successfully.
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.578 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] resizing rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.678 253542 DEBUG nova.objects.instance [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lazy-loading 'migration_context' on Instance uuid 95b483fd-03ee-4f4f-b242-a04fe0f183cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.699 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.700 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Ensure instance console log exists: /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.700 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.701 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.701 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.703 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.707 253542 WARNING nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.712 253542 DEBUG nova.virt.libvirt.host [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.713 253542 DEBUG nova.virt.libvirt.host [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.715 253542 DEBUG nova.virt.libvirt.host [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.716 253542 DEBUG nova.virt.libvirt.host [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.716 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.717 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.717 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.717 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.718 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.718 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.718 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.719 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.719 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.719 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.719 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.720 253542 DEBUG nova.virt.hardware [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.724 253542 DEBUG nova.privsep.utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 03:21:44 np0005534516 nova_compute[253538]: 2025-11-25 08:21:44.724 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 49 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 85 KiB/s wr, 11 op/s
Nov 25 03:21:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:21:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472923506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.124156235 +0000 UTC m=+0.045300710 container create f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.132 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.154 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:45 np0005534516 systemd[1]: Started libpod-conmon-f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3.scope.
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.160 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.191040537 +0000 UTC m=+0.112185032 container init f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.197476087 +0000 UTC m=+0.118620562 container start f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.106282705 +0000 UTC m=+0.027427200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.20041126 +0000 UTC m=+0.121555755 container attach f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:21:45 np0005534516 youthful_turing[268383]: 167 167
Nov 25 03:21:45 np0005534516 systemd[1]: libpod-f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3.scope: Deactivated successfully.
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.20294548 +0000 UTC m=+0.124089955 container died f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:21:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-00d2afccab0b28cb46f19dbb7b65e26834ef7686fac55426821ce6889f513ff5-merged.mount: Deactivated successfully.
Nov 25 03:21:45 np0005534516 podman[268347]: 2025-11-25 08:21:45.244659559 +0000 UTC m=+0.165804074 container remove f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:21:45 np0005534516 systemd[1]: libpod-conmon-f59ffacf7ccb866502760cdb99c5f09da527fcfe8ddd59c3c52e5b7ffc63cce3.scope: Deactivated successfully.
Nov 25 03:21:45 np0005534516 podman[268426]: 2025-11-25 08:21:45.397948481 +0000 UTC m=+0.038825898 container create c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:21:45 np0005534516 systemd[1]: Started libpod-conmon-c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3.scope.
Nov 25 03:21:45 np0005534516 podman[268426]: 2025-11-25 08:21:45.382163109 +0000 UTC m=+0.023040556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33887bd2934a0aca35b8fb999927fc0798d70ac571a6ccbee7a3cb6f702e565d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33887bd2934a0aca35b8fb999927fc0798d70ac571a6ccbee7a3cb6f702e565d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33887bd2934a0aca35b8fb999927fc0798d70ac571a6ccbee7a3cb6f702e565d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33887bd2934a0aca35b8fb999927fc0798d70ac571a6ccbee7a3cb6f702e565d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:45 np0005534516 podman[268426]: 2025-11-25 08:21:45.508850666 +0000 UTC m=+0.149728113 container init c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:21:45 np0005534516 podman[268426]: 2025-11-25 08:21:45.514606318 +0000 UTC m=+0.155483735 container start c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:21:45 np0005534516 podman[268426]: 2025-11-25 08:21:45.517744045 +0000 UTC m=+0.158621542 container attach c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:21:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:21:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542732332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.608 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.612 253542 DEBUG nova.objects.instance [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95b483fd-03ee-4f4f-b242-a04fe0f183cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.629 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <uuid>95b483fd-03ee-4f4f-b242-a04fe0f183cd</uuid>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <name>instance-00000001</name>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:name>tempest-AutoAllocateNetworkTest-server-634532863</nova:name>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:21:44</nova:creationTime>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:user uuid="6afe291f5eed4868aa587372c8338aa9">tempest-AutoAllocateNetworkTest-1712148102-project-member</nova:user>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <nova:project uuid="85521dbe314d4431bc8ec53a24b5e793">tempest-AutoAllocateNetworkTest-1712148102</nova:project>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="serial">95b483fd-03ee-4f4f-b242-a04fe0f183cd</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="uuid">95b483fd-03ee-4f4f-b242-a04fe0f183cd</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/console.log" append="off"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:21:45 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:21:45 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:21:45 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:21:45 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.718 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.719 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.719 253542 INFO nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Using config drive
Nov 25 03:21:45 np0005534516 nova_compute[253538]: 2025-11-25 08:21:45.743 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]: {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    "0": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "devices": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "/dev/loop3"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            ],
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_name": "ceph_lv0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_size": "21470642176",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "name": "ceph_lv0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "tags": {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_name": "ceph",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.crush_device_class": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.encrypted": "0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_id": "0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.vdo": "0"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            },
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "vg_name": "ceph_vg0"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        }
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    ],
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    "1": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "devices": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "/dev/loop4"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            ],
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_name": "ceph_lv1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_size": "21470642176",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "name": "ceph_lv1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "tags": {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_name": "ceph",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.crush_device_class": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.encrypted": "0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_id": "1",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.vdo": "0"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            },
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "vg_name": "ceph_vg1"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        }
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    ],
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    "2": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "devices": [
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "/dev/loop5"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            ],
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_name": "ceph_lv2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_size": "21470642176",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "name": "ceph_lv2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "tags": {
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.cluster_name": "ceph",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.crush_device_class": "",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.encrypted": "0",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osd_id": "2",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:                "ceph.vdo": "0"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            },
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "type": "block",
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:            "vg_name": "ceph_vg2"
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:        }
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]:    ]
Nov 25 03:21:46 np0005534516 optimistic_mclean[268443]: }
Nov 25 03:21:46 np0005534516 systemd[1]: libpod-c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3.scope: Deactivated successfully.
Nov 25 03:21:46 np0005534516 podman[268426]: 2025-11-25 08:21:46.306697735 +0000 UTC m=+0.947575142 container died c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:21:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-33887bd2934a0aca35b8fb999927fc0798d70ac571a6ccbee7a3cb6f702e565d-merged.mount: Deactivated successfully.
Nov 25 03:21:46 np0005534516 podman[268426]: 2025-11-25 08:21:46.376693025 +0000 UTC m=+1.017570442 container remove c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:21:46 np0005534516 systemd[1]: libpod-conmon-c58369ae8d60bfe8eaa375124d46885e3a0d368e06e54ac8063e8882429acff3.scope: Deactivated successfully.
Nov 25 03:21:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 49 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 85 KiB/s wr, 11 op/s
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.009082831 +0000 UTC m=+0.041886224 container create b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:21:47 np0005534516 systemd[1]: Started libpod-conmon-b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811.scope.
Nov 25 03:21:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:46.988885336 +0000 UTC m=+0.021688749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.095548382 +0000 UTC m=+0.128351855 container init b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.106788227 +0000 UTC m=+0.139591660 container start b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 03:21:47 np0005534516 keen_pare[268638]: 167 167
Nov 25 03:21:47 np0005534516 systemd[1]: libpod-b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811.scope: Deactivated successfully.
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.125885112 +0000 UTC m=+0.158688515 container attach b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.12655385 +0000 UTC m=+0.159357253 container died b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:21:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d160eda7df632bfd47b1d01a6366615d0f5a529ae0daa68fb3d5e50f09e51b63-merged.mount: Deactivated successfully.
Nov 25 03:21:47 np0005534516 podman[268621]: 2025-11-25 08:21:47.721842128 +0000 UTC m=+0.754645521 container remove b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:21:47 np0005534516 systemd[1]: libpod-conmon-b3b2f2bbd63138f040bf41fc39f91ec97493118a636daa39d1cc0b24f4722811.scope: Deactivated successfully.
Nov 25 03:21:47 np0005534516 nova_compute[253538]: 2025-11-25 08:21:47.865 253542 INFO nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Creating config drive at /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config
Nov 25 03:21:47 np0005534516 nova_compute[253538]: 2025-11-25 08:21:47.872 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbv32u5yi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:21:47 np0005534516 podman[268663]: 2025-11-25 08:21:47.905558702 +0000 UTC m=+0.045912006 container create aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:21:47 np0005534516 systemd[1]: Started libpod-conmon-aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50.scope.
Nov 25 03:21:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:21:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12035bbab82b243eb963cb37ca6eb80556767b5f1d44cd967309d2f27df35b7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12035bbab82b243eb963cb37ca6eb80556767b5f1d44cd967309d2f27df35b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12035bbab82b243eb963cb37ca6eb80556767b5f1d44cd967309d2f27df35b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12035bbab82b243eb963cb37ca6eb80556767b5f1d44cd967309d2f27df35b7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:21:47 np0005534516 podman[268663]: 2025-11-25 08:21:47.981724754 +0000 UTC m=+0.122078068 container init aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:21:47 np0005534516 podman[268663]: 2025-11-25 08:21:47.889014198 +0000 UTC m=+0.029367522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:21:47 np0005534516 podman[268663]: 2025-11-25 08:21:47.98869998 +0000 UTC m=+0.129053324 container start aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:21:47 np0005534516 podman[268663]: 2025-11-25 08:21:47.992152036 +0000 UTC m=+0.132505370 container attach aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.005 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbv32u5yi" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.032 253542 DEBUG nova.storage.rbd_utils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] rbd image 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.037 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:21:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.233 253542 DEBUG oslo_concurrency.processutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config 95b483fd-03ee-4f4f-b242-a04fe0f183cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.234 253542 INFO nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Deleting local config drive /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd/disk.config because it was imported into RBD.
Nov 25 03:21:48 np0005534516 systemd[1]: Starting libvirt secret daemon...
Nov 25 03:21:48 np0005534516 systemd[1]: Started libvirt secret daemon.
Nov 25 03:21:48 np0005534516 systemd-machined[215790]: New machine qemu-1-instance-00000001.
Nov 25 03:21:48 np0005534516 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.784 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058908.7839665, 95b483fd-03ee-4f4f-b242-a04fe0f183cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.785 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] VM Resumed (Lifecycle Event)
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.790 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.790 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.795 253542 INFO nova.virt.libvirt.driver [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Instance spawned successfully.
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.795 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.822 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.830 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.833 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.834 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.834 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.835 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.835 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.835 253542 DEBUG nova.virt.libvirt.driver [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.854 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058908.7856693, 95b483fd-03ee-4f4f-b242-a04fe0f183cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.854 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] VM Started (Lifecycle Event)
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.873 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.879 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:21:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 78 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 53 op/s
Nov 25 03:21:48 np0005534516 nova_compute[253538]: 2025-11-25 08:21:48.895 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]: {
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_id": 1,
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "type": "bluestore"
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    },
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_id": 2,
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "type": "bluestore"
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    },
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_id": 0,
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:        "type": "bluestore"
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]:    }
Nov 25 03:21:48 np0005534516 priceless_hermann[268683]: }
Nov 25 03:21:49 np0005534516 systemd[1]: libpod-aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50.scope: Deactivated successfully.
Nov 25 03:21:49 np0005534516 systemd[1]: libpod-aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50.scope: Consumed 1.026s CPU time.
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.053 253542 INFO nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Took 9.47 seconds to spawn the instance on the hypervisor.
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.054 253542 DEBUG nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:21:49 np0005534516 podman[268829]: 2025-11-25 08:21:49.089137142 +0000 UTC m=+0.037846742 container died aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:21:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12035bbab82b243eb963cb37ca6eb80556767b5f1d44cd967309d2f27df35b7e-merged.mount: Deactivated successfully.
Nov 25 03:21:49 np0005534516 podman[268829]: 2025-11-25 08:21:49.240949092 +0000 UTC m=+0.189658692 container remove aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hermann, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:21:49 np0005534516 systemd[1]: libpod-conmon-aadf2f4c3a41b88f3478e1ec4e12a6a568e4d85ba6f06c7f7a7d3cb7d7f68a50.scope: Deactivated successfully.
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.263 253542 INFO nova.compute.manager [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Took 11.20 seconds to build instance.
Nov 25 03:21:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:21:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.303 253542 DEBUG oslo_concurrency.lockutils [None req-f10f6a1b-29f1-4487-9bc5-9d28c3d86390 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:21:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8aa26159-15b6-4e57-9ca4-8da4f068ee1d does not exist
Nov 25 03:21:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a74a53a8-810c-44d9-a907-874c30cbbbe1 does not exist
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.395 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.396 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.524 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.651 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.652 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.661 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.662 253542 INFO nova.compute.claims [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:21:49 np0005534516 nova_compute[253538]: 2025-11-25 08:21:49.811 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57417244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.238 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.244 253542 DEBUG nova.compute.provider_tree [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.291 253542 ERROR nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [req-d33668f2-c7d3-454d-b6c2-8a3adc165d4e] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-d33668f2-c7d3-454d-b6c2-8a3adc165d4e"}]}
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.308 253542 DEBUG nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.324 253542 DEBUG nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.325 253542 DEBUG nova.compute.provider_tree [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.343 253542 DEBUG nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.364 253542 DEBUG nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.424 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806469668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 2.7 MiB/s wr, 58 op/s
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.901 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
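The `ceph df --format=json` round-trips above feed the DISK_GB figure in the inventory that follows. A minimal sketch of deriving a whole-GiB total from that JSON; the `stats`/`total_bytes` field names match recent Ceph releases but should be treated as an assumption here (nova's actual parsing lives in nova/storage/rbd_utils.py):

```python
import json

def disk_gb_from_ceph_df(raw_json):
    """Return the cluster's total capacity in whole GiB from `ceph df` JSON."""
    stats = json.loads(raw_json)["stats"]
    return stats["total_bytes"] // (1024 ** 3)

# Sample payload shaped like `ceph df --format=json` output (assumed layout),
# sized to match the 60 GiB the pgmap lines report for this cluster.
sample = json.dumps({"stats": {
    "total_bytes": 60 * 1024 ** 3,
    "total_used_bytes": 210 * 1024 ** 2,
    "total_avail_bytes": 60 * 1024 ** 3,
}})
```

The reported inventory total of 59 GB is smaller than the raw 60 GiB because nova applies its own reservations and rounding on top of the cluster figure.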
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.906 253542 DEBUG nova.compute.provider_tree [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.993 253542 DEBUG nova.scheduler.client.report [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updated inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.994 253542 DEBUG nova.compute.provider_tree [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 03:21:50 np0005534516 nova_compute[253538]: 2025-11-25 08:21:50.994 253542 DEBUG nova.compute.provider_tree [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
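The sequence from the 409 at 08:21:50.291 to the successful update at 08:21:50.993 is Placement's optimistic concurrency control in action: each inventory PUT carries the resource provider generation the client last saw, a mismatch returns `placement.concurrent_update`, and the client re-reads the provider before retrying. A hedged stand-in (the class and helper names are hypothetical, not nova's):

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 placement.concurrent_update response."""

class FakePlacement:
    """Toy server modeling a resource provider's generation counter."""
    def __init__(self, generation=8):
        self.generation = generation
        self.inventory = {}

    def put_inventory(self, inventory, generation):
        # Compare-and-swap: reject writes based on a stale generation.
        if generation != self.generation:
            raise ConflictError("resource provider generation conflict")
        self.inventory = inventory
        self.generation += 1  # each successful write bumps the generation
        return self.generation

def set_inventory_with_retry(server, inventory, seen_generation, retries=3):
    """On conflict, refresh the generation and retry, as the log's
    _refresh_and_get_inventory pass does."""
    for _ in range(retries):
        try:
            return server.put_inventory(inventory, seen_generation)
        except ConflictError:
            seen_generation = server.generation  # re-read before retrying
    raise ConflictError("out of retries")
```

With a stale generation of 7 against a server at 8, the first attempt conflicts, the refresh picks up 8, and the retry lands the write and returns 9, mirroring the log's "generation from 8 to 9" line.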
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.178 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.181 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.425 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.427 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.544 253542 INFO nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.547 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.547 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.548 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.548 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.549 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.550 253542 INFO nova.compute.manager [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Terminating instance#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.551 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "refresh_cache-95b483fd-03ee-4f4f-b242-a04fe0f183cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.552 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquired lock "refresh_cache-95b483fd-03ee-4f4f-b242-a04fe0f183cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.552 253542 DEBUG nova.network.neutron [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:21:51 np0005534516 nova_compute[253538]: 2025-11-25 08:21:51.749 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.070 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.072 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.072 253542 INFO nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Creating image(s)#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.093 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.112 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.133 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.136 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.156 253542 DEBUG nova.network.neutron [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.169 253542 WARNING oslo_policy.policy [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.170 253542 WARNING oslo_policy.policy [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.172 253542 DEBUG nova.policy [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '19a48db5eafb4ccb9008a204aa3d72d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2671313ddba04346ac0e2eef435f909c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.208 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
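The `qemu-img info` call above is wrapped in `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which caps the child's address space and CPU time so that probing a hostile or corrupt image cannot exhaust the host. A sketch of the same idea with the standard library (`run_limited` is a hypothetical helper, not nova's code):

```python
import resource
import subprocess

def run_limited(cmd, as_bytes=1 << 30, cpu_seconds=30):
    """Run cmd with RLIMIT_AS and RLIMIT_CPU applied in the child process,
    mirroring what oslo_concurrency.prlimit does for nova. POSIX-only."""
    def limits():
        # Runs in the forked child just before exec.
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, capture_output=True, text=True,
                          preexec_fn=limits, check=True).stdout
```

Against a real base image the call would look like `run_limited(["qemu-img", "info", path, "--force-share", "--output=json"])`; note that `preexec_fn` is not safe in multithreaded parents, which is one reason nova shells out to the prlimit module instead.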
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.209 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.209 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.210 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.233 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.239 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 67445e66-65d6-487d-8c34-7d798ac485c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.579 253542 DEBUG nova.network.neutron [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.596 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 67445e66-65d6-487d-8c34-7d798ac485c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.620 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Releasing lock "refresh_cache-95b483fd-03ee-4f4f-b242-a04fe0f183cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.621 253542 DEBUG nova.compute.manager [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.655 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] resizing rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
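The flow just above is the RBD image backend's copy-on-import path: `rbd import --image-format=2` uploads the cached base image into the `vms` pool, then the fresh image is grown to the flavor's root disk size. The resize target 1073741824 is simply 1 GiB expressed in bytes; a sketch of that conversion:

```python
def flavor_root_bytes(root_gb):
    """Convert a flavor's root_gb to the byte count passed to RBD resize,
    e.g. the log's 'resizing rbd image ... to 1073741824' for a 1 GB root disk."""
    return root_gb * 1024 ** 3
```

Image format 2 is required for RBD features such as cloning and layering, which is why the import pins `--image-format=2` rather than relying on defaults.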
Nov 25 03:21:52 np0005534516 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 03:21:52 np0005534516 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 4.297s CPU time.
Nov 25 03:21:52 np0005534516 systemd-machined[215790]: Machine qemu-1-instance-00000001 terminated.
Nov 25 03:21:52 np0005534516 podman[269084]: 2025-11-25 08:21:52.849663583 +0000 UTC m=+0.095972448 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.873 253542 INFO nova.virt.libvirt.driver [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Instance destroyed successfully.#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.874 253542 DEBUG nova.objects.instance [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lazy-loading 'resources' on Instance uuid 95b483fd-03ee-4f4f-b242-a04fe0f183cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:21:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 981 KiB/s rd, 2.2 MiB/s wr, 56 op/s
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.935 253542 DEBUG nova.objects.instance [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'migration_context' on Instance uuid 67445e66-65d6-487d-8c34-7d798ac485c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.947 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.948 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Ensure instance console log exists: /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.948 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.948 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:52 np0005534516 nova_compute[253538]: 2025-11-25 08:21:52.949 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 25 03:21:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 25 03:21:53 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:21:53
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'volumes', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.322 253542 INFO nova.virt.libvirt.driver [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Deleting instance files /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd_del#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.323 253542 INFO nova.virt.libvirt.driver [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Deletion of /var/lib/nova/instances/95b483fd-03ee-4f4f-b242-a04fe0f183cd_del complete#033[00m
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.458 253542 DEBUG nova.virt.libvirt.host [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.459 253542 INFO nova.virt.libvirt.host [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] UEFI support detected#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.461 253542 INFO nova.compute.manager [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.462 253542 DEBUG oslo.service.loopingcall [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.462 253542 DEBUG nova.compute.manager [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.463 253542 DEBUG nova.network.neutron [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.693 253542 DEBUG nova.network.neutron [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.704 253542 DEBUG nova.network.neutron [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.717 253542 INFO nova.compute.manager [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Took 0.25 seconds to deallocate network for instance.#033[00m
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:21:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.782 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.783 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.848 253542 DEBUG oslo_concurrency.processutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:53 np0005534516 nova_compute[253538]: 2025-11-25 08:21:53.870 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Successfully created port: 5235689d-9333-45f8-8f44-50e4a006f7d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:21:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:21:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602757215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.299 253542 DEBUG oslo_concurrency.processutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.305 253542 DEBUG nova.compute.provider_tree [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.327 253542 DEBUG nova.scheduler.client.report [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.363 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.408 253542 INFO nova.scheduler.client.report [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Deleted allocations for instance 95b483fd-03ee-4f4f-b242-a04fe0f183cd#033[00m
Nov 25 03:21:54 np0005534516 nova_compute[253538]: 2025-11-25 08:21:54.524 253542 DEBUG oslo_concurrency.lockutils [None req-f596dd79-f436-4d59-9ac2-595cc8586113 6afe291f5eed4868aa587372c8338aa9 85521dbe314d4431bc8ec53a24b5e793 - - default default] Lock "95b483fd-03ee-4f4f-b242-a04fe0f183cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:21:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 78 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 172 op/s
Nov 25 03:21:55 np0005534516 nova_compute[253538]: 2025-11-25 08:21:55.112 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Successfully updated port: 5235689d-9333-45f8-8f44-50e4a006f7d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:21:55 np0005534516 nova_compute[253538]: 2025-11-25 08:21:55.148 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:21:55 np0005534516 nova_compute[253538]: 2025-11-25 08:21:55.148 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquired lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:21:55 np0005534516 nova_compute[253538]: 2025-11-25 08:21:55.149 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:21:55 np0005534516 podman[269163]: 2025-11-25 08:21:55.865251247 +0000 UTC m=+0.104597380 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 03:21:56 np0005534516 nova_compute[253538]: 2025-11-25 08:21:56.392 253542 DEBUG nova.compute.manager [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-changed-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:21:56 np0005534516 nova_compute[253538]: 2025-11-25 08:21:56.393 253542 DEBUG nova.compute.manager [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Refreshing instance network info cache due to event network-changed-5235689d-9333-45f8-8f44-50e4a006f7d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:21:56 np0005534516 nova_compute[253538]: 2025-11-25 08:21:56.393 253542 DEBUG oslo_concurrency.lockutils [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:21:56 np0005534516 nova_compute[253538]: 2025-11-25 08:21:56.625 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:21:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 78 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 172 op/s
Nov 25 03:21:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:21:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.239 253542 DEBUG nova.network.neutron [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updating instance_info_cache with network_info: [{"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.269 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Releasing lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.270 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Instance network_info: |[{"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.270 253542 DEBUG oslo_concurrency.lockutils [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.270 253542 DEBUG nova.network.neutron [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Refreshing network info cache for port 5235689d-9333-45f8-8f44-50e4a006f7d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.273 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Start _get_guest_xml network_info=[{"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.278 253542 WARNING nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.285 253542 DEBUG nova.virt.libvirt.host [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.286 253542 DEBUG nova.virt.libvirt.host [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.289 253542 DEBUG nova.virt.libvirt.host [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.290 253542 DEBUG nova.virt.libvirt.host [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.290 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.291 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:21:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='129694508',id=25,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1902324690',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.291 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.291 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.291 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.292 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.292 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.292 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.292 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.292 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.293 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.293 253542 DEBUG nova.virt.hardware [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.295 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:21:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:21:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146064950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.753 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.786 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:21:59 np0005534516 nova_compute[253538]: 2025-11-25 08:21:59.791 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3909728107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.242 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.245 253542 DEBUG nova.virt.libvirt.vif [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:21:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-101687112',id=2,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-5kfa9qry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:21:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=67445e66-65d6-487d-8c34-7d798ac485c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.245 253542 DEBUG nova.network.os_vif_util [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.246 253542 DEBUG nova.network.os_vif_util [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.249 253542 DEBUG nova.objects.instance [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'pci_devices' on Instance uuid 67445e66-65d6-487d-8c34-7d798ac485c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.272 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <uuid>67445e66-65d6-487d-8c34-7d798ac485c8</uuid>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <name>instance-00000002</name>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-101687112</nova:name>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:21:59</nova:creationTime>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1902324690">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:user uuid="19a48db5eafb4ccb9008a204aa3d72d4">tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member</nova:user>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:project uuid="2671313ddba04346ac0e2eef435f909c">tempest-ServersWithSpecificFlavorTestJSON-625252619</nova:project>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <nova:port uuid="5235689d-9333-45f8-8f44-50e4a006f7d3">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="serial">67445e66-65d6-487d-8c34-7d798ac485c8</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="uuid">67445e66-65d6-487d-8c34-7d798ac485c8</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/67445e66-65d6-487d-8c34-7d798ac485c8_disk">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/67445e66-65d6-487d-8c34-7d798ac485c8_disk.config">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:c9:db:d0"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <target dev="tap5235689d-93"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/console.log" append="off"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:22:00 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:22:00 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:22:00 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:22:00 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.273 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Preparing to wait for external event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.274 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.274 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.275 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.275 253542 DEBUG nova.virt.libvirt.vif [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:21:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-101687112',id=2,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-5kfa9qry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:21:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=67445e66-65d6-487d-8c34-7d798ac485c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.276 253542 DEBUG nova.network.os_vif_util [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.276 253542 DEBUG nova.network.os_vif_util [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.277 253542 DEBUG os_vif [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.312 253542 DEBUG ovsdbapp.backend.ovs_idl [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.312 253542 DEBUG ovsdbapp.backend.ovs_idl [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.312 253542 DEBUG ovsdbapp.backend.ovs_idl [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.315 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.329 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.329 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.330 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:22:00 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.331 253542 INFO oslo.privsep.daemon [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpjvrp0faa/privsep.sock']
Nov 25 03:22:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 88 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.108 253542 INFO oslo.privsep.daemon [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Spawned new privsep daemon via rootwrap
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.919 269249 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.925 269249 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.927 269249 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:00.927 269249 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269249
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.415 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5235689d-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.415 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5235689d-93, col_values=(('external_ids', {'iface-id': '5235689d-9333-45f8-8f44-50e4a006f7d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:db:d0', 'vm-uuid': '67445e66-65d6-487d-8c34-7d798ac485c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:01 np0005534516 NetworkManager[48915]: <info>  [1764058921.4185] manager: (tap5235689d-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.426 253542 INFO os_vif [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93')
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.494 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.495 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.495 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No VIF found with MAC fa:16:3e:c9:db:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.496 253542 INFO nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Using config drive
Nov 25 03:22:01 np0005534516 nova_compute[253538]: 2025-11-25 08:22:01.522 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:01 np0005534516 podman[269256]: 2025-11-25 08:22:01.575696195 +0000 UTC m=+0.105917957 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 03:22:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 88 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 25 03:22:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:22:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.744 253542 DEBUG nova.network.neutron [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updated VIF entry in instance network info cache for port 5235689d-9333-45f8-8f44-50e4a006f7d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.745 253542 DEBUG nova.network.neutron [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updating instance_info_cache with network_info: [{"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.765 253542 DEBUG oslo_concurrency.lockutils [req-be5badac-a69a-47e0-babb-99eb1ee57d2b req-86bebe82-20d4-40d5-99ce-20dfbe364cfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.788 253542 INFO nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Creating config drive at /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.798 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsydgyuwt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.926 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsydgyuwt" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.952 253542 DEBUG nova.storage.rbd_utils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 67445e66-65d6-487d-8c34-7d798ac485c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:03 np0005534516 nova_compute[253538]: 2025-11-25 08:22:03.955 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config 67445e66-65d6-487d-8c34-7d798ac485c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.171 253542 DEBUG oslo_concurrency.processutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config 67445e66-65d6-487d-8c34-7d798ac485c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.173 253542 INFO nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Deleting local config drive /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8/disk.config because it was imported into RBD.
Nov 25 03:22:04 np0005534516 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 03:22:04 np0005534516 kernel: tap5235689d-93: entered promiscuous mode
Nov 25 03:22:04 np0005534516 NetworkManager[48915]: <info>  [1764058924.2530] manager: (tap5235689d-93): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 03:22:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:04Z|00027|binding|INFO|Claiming lport 5235689d-9333-45f8-8f44-50e4a006f7d3 for this chassis.
Nov 25 03:22:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:04Z|00028|binding|INFO|5235689d-9333-45f8-8f44-50e4a006f7d3: Claiming fa:16:3e:c9:db:d0 10.100.0.10
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.260 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.281 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:db:d0 10.100.0.10'], port_security=['fa:16:3e:c9:db:d0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '67445e66-65d6-487d-8c34-7d798ac485c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5235689d-9333-45f8-8f44-50e4a006f7d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:22:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.283 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5235689d-9333-45f8-8f44-50e4a006f7d3 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 bound to our chassis
Nov 25 03:22:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.284 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 03:22:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.285 162739 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpuk0ke5fq/privsep.sock']
Nov 25 03:22:04 np0005534516 systemd-udevd[269354]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:22:04 np0005534516 NetworkManager[48915]: <info>  [1764058924.2996] device (tap5235689d-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:22:04 np0005534516 NetworkManager[48915]: <info>  [1764058924.3028] device (tap5235689d-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:22:04 np0005534516 systemd-machined[215790]: New machine qemu-2-instance-00000002.
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.335 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:04 np0005534516 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 25 03:22:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:04Z|00029|binding|INFO|Setting lport 5235689d-9333-45f8-8f44-50e4a006f7d3 ovn-installed in OVS
Nov 25 03:22:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:04Z|00030|binding|INFO|Setting lport 5235689d-9333-45f8-8f44-50e4a006f7d3 up in Southbound
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:04 np0005534516 nova_compute[253538]: 2025-11-25 08:22:04.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 88 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.055 162739 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.055 162739 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuk0ke5fq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.887 269370 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.895 269370 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.899 269370 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:04.899 269370 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269370#033[00m
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.059 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[197b64dd-200d-4948-becd-a30cb60f2c52]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.264 253542 DEBUG nova.compute.manager [req-ae813064-8615-41b9-a9dd-52561d643c80 req-ae415c4c-1ce5-4f59-a34b-6b79f6f89516 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.267 253542 DEBUG oslo_concurrency.lockutils [req-ae813064-8615-41b9-a9dd-52561d643c80 req-ae415c4c-1ce5-4f59-a34b-6b79f6f89516 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.268 253542 DEBUG oslo_concurrency.lockutils [req-ae813064-8615-41b9-a9dd-52561d643c80 req-ae415c4c-1ce5-4f59-a34b-6b79f6f89516 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.269 253542 DEBUG oslo_concurrency.lockutils [req-ae813064-8615-41b9-a9dd-52561d643c80 req-ae415c4c-1ce5-4f59-a34b-6b79f6f89516 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.269 253542 DEBUG nova.compute.manager [req-ae813064-8615-41b9-a9dd-52561d643c80 req-ae415c4c-1ce5-4f59-a34b-6b79f6f89516 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Processing event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.759 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.760 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.780 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.843 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058925.8434975, 67445e66-65d6-487d-8c34-7d798ac485c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.844 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] VM Started (Lifecycle Event)
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.846 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.851 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.856 253542 INFO nova.virt.libvirt.driver [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Instance spawned successfully.
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.857 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.861 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.864 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.873 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.873 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.874 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.875 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.875 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.876 253542 DEBUG nova.virt.libvirt.driver [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.879 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.880 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.882 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.883 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058925.8436034, 67445e66-65d6-487d-8c34-7d798ac485c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.883 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] VM Paused (Lifecycle Event)
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.890 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.890 253542 INFO nova.compute.claims [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.942 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.944 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058925.849926, 67445e66-65d6-487d-8c34-7d798ac485c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.945 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] VM Resumed (Lifecycle Event)
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.963 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.965 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.970 269370 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.970 269370 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:05.970 269370 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.974 253542 INFO nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Took 13.90 seconds to spawn the instance on the hypervisor.
Nov 25 03:22:05 np0005534516 nova_compute[253538]: 2025-11-25 08:22:05.975 253542 DEBUG nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.002 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.095 253542 INFO nova.compute.manager [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Took 16.47 seconds to build instance.
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.127 253542 DEBUG oslo_concurrency.lockutils [None req-c902f289-5678-4813-9bdd-1c6484e2c112 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.175 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:22:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577669593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.644 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.649 253542 DEBUG nova.compute.provider_tree [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.662 253542 DEBUG nova.scheduler.client.report [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.763 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.765 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.850 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.851 253542 DEBUG nova.network.neutron [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.885 253542 INFO nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:22:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 88 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 194 KiB/s wr, 23 op/s
Nov 25 03:22:06 np0005534516 nova_compute[253538]: 2025-11-25 08:22:06.903 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1589c8-55e0-457f-b693-95f538d63003]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.025 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef52fe4f-71 in ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.040 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef52fe4f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.040 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e42d4ff-b885-467c-b74f-29c2847e5649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.045 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58354b62-ae64-4aa1-bd28-b2e48dfe01cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.076 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b938dfac-cbb4-45b7-9f16-668ece6f002b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.114 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5be6bf32-6665-4195-a1f9-1c0599a20281]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.116 162739 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpvbord9cb/privsep.sock']
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.172 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.173 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.174 253542 INFO nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Creating image(s)
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.197 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.223 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.252 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.256 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.336 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.337 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.338 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.338 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.364 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.368 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.512 253542 DEBUG nova.compute.manager [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.514 253542 DEBUG oslo_concurrency.lockutils [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.515 253542 DEBUG oslo_concurrency.lockutils [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.516 253542 DEBUG oslo_concurrency.lockutils [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.516 253542 DEBUG nova.compute.manager [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] No waiting events found dispatching network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.517 253542 WARNING nova.compute.manager [req-0a851c72-7fa1-45c7-9881-f6b993633129 req-ce505373-9b77-4500-807a-2c23a0d8c78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received unexpected event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 for instance with vm_state active and task_state None.
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.562 253542 DEBUG nova.network.neutron [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.563 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.741 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.833 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] resizing rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
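The resize target 1073741824 above is just the flavor's 1 GiB root disk (`root_gb=1` on `m1.nano`, seen later in the log) expressed in bytes. A one-line check; the helper name is illustrative, not nova's:

```python
def root_disk_bytes(root_gb: int) -> int:
    """Convert a flavor's root_gb (GiB) to the byte count nova passes to rbd resize."""
    return root_gb * 1024 ** 3

# m1.nano has root_gb=1, matching the "resizing rbd image ... to 1073741824" line.
```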
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.871 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058912.8691206, 95b483fd-03ee-4f4f-b242-a04fe0f183cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.872 253542 INFO nova.compute.manager [-] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.888 253542 DEBUG nova.compute.manager [None req-abfc67f3-c83e-40c6-8fbe-71369e9ea5c7 - - - - - -] [instance: 95b483fd-03ee-4f4f-b242-a04fe0f183cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
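The nova_compute records above follow the usual oslo.log layout: timestamp, pid, level, logger name, a bracketed request context, an optional `[instance: <uuid>]` tag, then the message. A minimal sketch of pulling fields out of such lines; the regex is an assumption fitted to the lines in this journal, not an official oslo.log parser:

```python
import re

# Assumed line shape, as seen in this journal:
#   <date> <time>.<ms> <pid> <LEVEL> <logger> [<context>] [instance: <uuid>] <message>
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<logger>\S+) "
    r"\[(?P<context>[^\]]*)\]"
    r"(?: \[instance: (?P<instance>[0-9a-f-]{36})\])?"
    r" (?P<msg>.*)"
)

def parse_oslo_line(line: str):
    """Return a dict of fields from one oslo.log line, or None if it doesn't match."""
    m = LINE_RE.search(line)
    return m.groupdict() if m else None

sample = ("2025-11-25 08:22:07.517 253542 WARNING nova.compute.manager "
          "[req-0a851c72-7fa1-45c7-9881-f6b993633129] "
          "[instance: 67445e66-65d6-487d-8c34-7d798ac485c8] "
          "Received unexpected event network-vif-plugged-...")
fields = parse_oslo_line(sample)
```

Using `search` rather than `match` lets the same pattern work whether or not the journald `Nov 25 ... nova_compute[...]:` prefix is still attached.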
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.935 162739 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.936 162739 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvbord9cb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.776 269560 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.784 269560 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.788 269560 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.789 269560 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269560#033[00m
Nov 25 03:22:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:07.939 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[41bd0e5a-e5d6-4e04-8536-98eeba3bc6ff]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.944 253542 DEBUG nova.objects.instance [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lazy-loading 'migration_context' on Instance uuid 04f4ebed-aaaa-4be2-9916-0361c1de57c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.960 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.960 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Ensure instance console log exists: /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.961 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.961 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.962 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.963 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.968 253542 WARNING nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.976 253542 DEBUG nova.virt.libvirt.host [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.977 253542 DEBUG nova.virt.libvirt.host [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.983 253542 DEBUG nova.virt.libvirt.host [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.984 253542 DEBUG nova.virt.libvirt.host [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.985 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.985 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.985 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.986 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.986 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.986 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.987 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.987 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.987 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.988 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.988 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.988 253542 DEBUG nova.virt.hardware [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
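The topology lines above show nova enumerating candidate guest CPU layouts for 1 vCPU with no flavor or image constraints (preferred 0:0:0, limits 65536 each), ending with the single valid `sockets=1,cores=1,threads=1`. A simplified sketch of that enumeration, not nova's actual `_get_possible_cpu_topologies` code: any (sockets, cores, threads) triple whose product equals the vCPU count and respects the limits qualifies.

```python
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals vcpus.
    Simplified sketch of the constraint nova checks; the 65536 defaults
    mirror the limits printed in the log."""
    out = []
    for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                           range(1, min(vcpus, max_cores) + 1),
                           range(1, min(vcpus, max_threads) + 1)):
        if s * c * t == vcpus:
            out.append((s, c, t))
    return out
```

For the single-vCPU m1.nano flavor only one topology exists, which is why the log reports "Got 1 possible topologies".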
Nov 25 03:22:07 np0005534516 nova_compute[253538]: 2025-11-25 08:22:07.991 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589033979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.479 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
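Nova's RBD backend shells out to `ceph mon dump --format=json` to learn the monitor addresses that later surface as `<host>` elements in the guest disk XML. A hedged sketch of extracting host/port pairs from that JSON; the field names follow the classic `ceph mon dump` schema, and the sample payload is fabricated to match the single-monitor cluster in this log:

```python
import json

def mon_hosts(mon_dump_json: str):
    """Extract (host, port) pairs from `ceph mon dump --format=json` output.
    Assumes the classic `addr` field of the form "IP:PORT/NONCE"; newer
    releases also expose `public_addrs`, which this sketch ignores."""
    doc = json.loads(mon_dump_json)
    pairs = []
    for mon in doc.get("mons", []):
        addr = mon["addr"].split("/")[0]      # drop the "/nonce" suffix
        host, _, port = addr.rpartition(":")  # split "host:port" from the right
        pairs.append((host, int(port)))
    return pairs

# Illustrative payload mirroring the one-mon cluster seen in this log:
sample = '{"epoch": 1, "mons": [{"name": "compute-0", "addr": "192.168.122.100:6789/0"}]}'
```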
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.504 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.509 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:08.515 269560 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:08.515 269560 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:08.515 269560 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:22:08 np0005534516 nova_compute[253538]: 2025-11-25 08:22:08.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 98 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 965 KiB/s wr, 110 op/s
Nov 25 03:22:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669008722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815814186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.044 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.046 253542 DEBUG nova.objects.instance [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04f4ebed-aaaa-4be2-9916-0361c1de57c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.056 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
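The resource tracker's `ceph df --format=json` call above feeds the audit of locally available storage. A sketch of summarizing that output; field names follow the `ceph df` JSON schema (top-level `stats` block), and the sample numbers are fabricated to mirror the 60 GiB / ~210 MiB-used cluster reported by ceph-mgr earlier in the log:

```python
import json

def cluster_capacity(ceph_df_json: str):
    """Summarize `ceph df --format=json` into total/used/available bytes."""
    stats = json.loads(ceph_df_json)["stats"]
    return {
        "total": stats["total_bytes"],
        "used": stats["total_used_bytes"],
        "free": stats["total_avail_bytes"],
    }

# Fabricated sample roughly matching the pgmap line above (60 GiB, ~210 MiB used):
sample = json.dumps({"stats": {
    "total_bytes": 64424509440,
    "total_used_bytes": 220200960,
    "total_avail_bytes": 64204308480,
}, "pools": []})
```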
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.068 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <uuid>04f4ebed-aaaa-4be2-9916-0361c1de57c5</uuid>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <name>instance-00000003</name>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-986849408</nova:name>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:22:07</nova:creationTime>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:user uuid="8572d5befb7449ee93a99483cc1a148b">tempest-DeleteServersAdminTestJSON-1175531157-project-member</nova:user>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <nova:project uuid="0706b4d7e21644ee997b95ec96c1fca2">tempest-DeleteServersAdminTestJSON-1175531157</nova:project>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="serial">04f4ebed-aaaa-4be2-9916-0361c1de57c5</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="uuid">04f4ebed-aaaa-4be2-9916-0361c1de57c5</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/console.log" append="off"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:22:09 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:22:09 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:22:09 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:22:09 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.150 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.151 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.152 253542 INFO nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Using config drive#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.175 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.186 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.187 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.190 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.191 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.283 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[464e0601-bae6-44ff-951b-c9d3e85eeb54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.301 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89344cf3-80a4-400f-87e0-6c2128207958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 NetworkManager[48915]: <info>  [1764058929.3029] manager: (tapef52fe4f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 03:22:09 np0005534516 systemd-udevd[269729]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.329 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bf759e15-cae1-4d4f-9668-4a89a6a6d835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.332 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f749638f-76e4-4644-866c-7840e60058f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:09 np0005534516 NetworkManager[48915]: <info>  [1764058929.3547] device (tapef52fe4f-70): carrier: link connected
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.358 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a03e180-183a-4bb3-962e-dbf040f50a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.378 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[453a5d8a-0c5b-4be6-a064-01e774f00a61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428395, 'reachable_time': 32929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269747, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.395 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e40b938-5db9-4c8e-b38e-af34ac676f77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:1cc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428395, 'tstamp': 428395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269748, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.410 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.412 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4676MB free_disk=59.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.412 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.413 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.415 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6acb507-2cbf-4997-adef-fd74a7c3beab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428395, 'reachable_time': 32929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269749, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.441 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79c025f5-7dd9-471e-a244-0eea782ed47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.497 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e063ac2-9569-48d2-9919-ec4e813d9a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.499 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.499 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.500 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef52fe4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:09 np0005534516 NetworkManager[48915]: <info>  [1764058929.5024] manager: (tapef52fe4f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 03:22:09 np0005534516 kernel: tapef52fe4f-70: entered promiscuous mode
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.504 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef52fe4f-70, col_values=(('external_ids', {'iface-id': 'c3160d37-f1ba-461a-855e-31f72d46baee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:09Z|00031|binding|INFO|Releasing lport c3160d37-f1ba-461a-855e-31f72d46baee from this chassis (sb_readonly=0)
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.527 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.527 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[22db0236-fb91-462e-a823-64e809fb4aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.529 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:22:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:09.529 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'env', 'PROCESS_TAG=haproxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef52fe4f-78d3-45fa-ab69-177fdfabe604.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.807 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 67445e66-65d6-487d-8c34-7d798ac485c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.808 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 04f4ebed-aaaa-4be2-9916-0361c1de57c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.808 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.808 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.866 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:09 np0005534516 podman[269780]: 2025-11-25 08:22:09.905562604 +0000 UTC m=+0.060081013 container create 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:22:09 np0005534516 systemd[1]: Started libpod-conmon-412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47.scope.
Nov 25 03:22:09 np0005534516 podman[269780]: 2025-11-25 08:22:09.867670954 +0000 UTC m=+0.022189373 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.965 253542 INFO nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Creating config drive at /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config#033[00m
Nov 25 03:22:09 np0005534516 nova_compute[253538]: 2025-11-25 08:22:09.973 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgutlwfpk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b696d176ed1fe3f68ef96c36dc7d5b035659cfd38836579d018864edf7bf5a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:10 np0005534516 podman[269780]: 2025-11-25 08:22:10.000203374 +0000 UTC m=+0.154721773 container init 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:22:10 np0005534516 podman[269780]: 2025-11-25 08:22:10.008096065 +0000 UTC m=+0.162614454 container start 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:22:10 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [NOTICE]   (269820) : New worker (269824) forked
Nov 25 03:22:10 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [NOTICE]   (269820) : Loading success.
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.108 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgutlwfpk" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.135 253542 DEBUG nova.storage.rbd_utils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.140 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2822] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2832] device (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2854] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2866] device (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2891] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2906] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2917] device (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 03:22:10 np0005534516 NetworkManager[48915]: <info>  [1764058930.2934] device (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:10Z|00032|binding|INFO|Releasing lport c3160d37-f1ba-461a-855e-31f72d46baee from this chassis (sb_readonly=0)
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.348 253542 DEBUG oslo_concurrency.processutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config 04f4ebed-aaaa-4be2-9916-0361c1de57c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.349 253542 INFO nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Deleting local config drive /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5/disk.config because it was imported into RBD.
Nov 25 03:22:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2303754560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:10 np0005534516 systemd-machined[215790]: New machine qemu-3-instance-00000003.
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.423 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:10 np0005534516 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.429 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.447 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.472 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.474 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.802 253542 DEBUG nova.compute.manager [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-changed-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.803 253542 DEBUG nova.compute.manager [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Refreshing instance network info cache due to event network-changed-5235689d-9333-45f8-8f44-50e4a006f7d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.804 253542 DEBUG oslo_concurrency.lockutils [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.804 253542 DEBUG oslo_concurrency.lockutils [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:22:10 np0005534516 nova_compute[253538]: 2025-11-25 08:22:10.804 253542 DEBUG nova.network.neutron [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Refreshing network info cache for port 5235689d-9333-45f8-8f44-50e4a006f7d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:22:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 129 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 159 op/s
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.092 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058931.0915585, 04f4ebed-aaaa-4be2-9916-0361c1de57c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.093 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] VM Resumed (Lifecycle Event)
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.096 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.097 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.102 253542 INFO nova.virt.libvirt.driver [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance spawned successfully.
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.103 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.112 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.117 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.129 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.130 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.131 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.132 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.133 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.134 253542 DEBUG nova.virt.libvirt.driver [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.139 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.140 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058931.0951734, 04f4ebed-aaaa-4be2-9916-0361c1de57c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.140 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] VM Started (Lifecycle Event)
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.167 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.170 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.185 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.257 253542 INFO nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Took 4.08 seconds to spawn the instance on the hypervisor.
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.258 253542 DEBUG nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.331 253542 INFO nova.compute.manager [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Took 5.50 seconds to build instance.
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.367 253542 DEBUG oslo_concurrency.lockutils [None req-a1cbd3bc-67ba-4a3b-b37a-d7f9b4486353 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.468 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:11 np0005534516 nova_compute[253538]: 2025-11-25 08:22:11.484 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:12 np0005534516 nova_compute[253538]: 2025-11-25 08:22:12.538 253542 DEBUG nova.network.neutron [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updated VIF entry in instance network info cache for port 5235689d-9333-45f8-8f44-50e4a006f7d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:22:12 np0005534516 nova_compute[253538]: 2025-11-25 08:22:12.539 253542 DEBUG nova.network.neutron [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updating instance_info_cache with network_info: [{"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:22:12 np0005534516 nova_compute[253538]: 2025-11-25 08:22:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:12 np0005534516 nova_compute[253538]: 2025-11-25 08:22:12.603 253542 DEBUG oslo_concurrency.lockutils [req-b2715f62-499a-4112-b185-d6b6d6ed4bbe req-22a8ad8b-9800-4413-9802-6014d27607e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-67445e66-65d6-487d-8c34-7d798ac485c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:22:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 134 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Nov 25 03:22:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:13 np0005534516 nova_compute[253538]: 2025-11-25 08:22:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.094 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Acquiring lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.095 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.095 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Acquiring lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.096 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.096 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.097 253542 INFO nova.compute.manager [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Terminating instance
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.098 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Acquiring lock "refresh_cache-04f4ebed-aaaa-4be2-9916-0361c1de57c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.098 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Acquired lock "refresh_cache-04f4ebed-aaaa-4be2-9916-0361c1de57c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.099 253542 DEBUG nova.network.neutron [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.349 253542 DEBUG nova.network.neutron [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.892 253542 DEBUG nova.network.neutron [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:22:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 229 op/s
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.908 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Releasing lock "refresh_cache-04f4ebed-aaaa-4be2-9916-0361c1de57c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:22:14 np0005534516 nova_compute[253538]: 2025-11-25 08:22:14.909 253542 DEBUG nova.compute.manager [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 03:22:14 np0005534516 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 25 03:22:14 np0005534516 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.506s CPU time.
Nov 25 03:22:14 np0005534516 systemd-machined[215790]: Machine qemu-3-instance-00000003 terminated.
Nov 25 03:22:15 np0005534516 nova_compute[253538]: 2025-11-25 08:22:15.133 253542 INFO nova.virt.libvirt.driver [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance destroyed successfully.
Nov 25 03:22:15 np0005534516 nova_compute[253538]: 2025-11-25 08:22:15.135 253542 DEBUG nova.objects.instance [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lazy-loading 'resources' on Instance uuid 04f4ebed-aaaa-4be2-9916-0361c1de57c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:22:15 np0005534516 nova_compute[253538]: 2025-11-25 08:22:15.578 253542 INFO nova.virt.libvirt.driver [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Deleting instance files /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5_del
Nov 25 03:22:15 np0005534516 nova_compute[253538]: 2025-11-25 08:22:15.580 253542 INFO nova.virt.libvirt.driver [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Deletion of /var/lib/nova/instances/04f4ebed-aaaa-4be2-9916-0361c1de57c5_del complete
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.047 253542 INFO nova.compute.manager [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.048 253542 DEBUG oslo.service.loopingcall [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.049 253542 DEBUG nova.compute.manager [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.049 253542 DEBUG nova.network.neutron [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.307 253542 DEBUG nova.network.neutron [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.324 253542 DEBUG nova.network.neutron [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.338 253542 INFO nova.compute.manager [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Took 0.29 seconds to deallocate network for instance.#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.525 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.526 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.597 253542 DEBUG oslo_concurrency.processutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 215 op/s
Nov 25 03:22:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:16.941 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:22:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:16.942 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:22:16 np0005534516 nova_compute[253538]: 2025-11-25 08:22:16.941 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3077333532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.048 253542 DEBUG oslo_concurrency.processutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.053 253542 DEBUG nova.compute.provider_tree [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.065 253542 DEBUG nova.scheduler.client.report [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.125 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.344 253542 INFO nova.scheduler.client.report [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Deleted allocations for instance 04f4ebed-aaaa-4be2-9916-0361c1de57c5#033[00m
Nov 25 03:22:17 np0005534516 nova_compute[253538]: 2025-11-25 08:22:17.474 253542 DEBUG oslo_concurrency.lockutils [None req-b2b8c8d4-054b-4305-a3dc-1cc3857b12d5 23499617114d4998a8fb5cb17c6c3975 144c57a9e53f495d9daa7e8b4c9012c8 - - default default] Lock "04f4ebed-aaaa-4be2-9916-0361c1de57c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 119 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Nov 25 03:22:19 np0005534516 nova_compute[253538]: 2025-11-25 08:22:19.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:20Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:db:d0 10.100.0.10
Nov 25 03:22:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:20Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:db:d0 10.100.0.10
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.345 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.345 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.365 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.475 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.475 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.481 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.481 253542 INFO nova.compute.claims [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:22:20 np0005534516 nova_compute[253538]: 2025-11-25 08:22:20.629 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 93 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Nov 25 03:22:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522173011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.118 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.127 253542 DEBUG nova.compute.provider_tree [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.143 253542 DEBUG nova.scheduler.client.report [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.244 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.245 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.319 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.319 253542 DEBUG nova.network.neutron [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.377 253542 INFO nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.406 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.623 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.624 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.625 253542 INFO nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Creating image(s)#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.645 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.668 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.689 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.694 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.767 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.768 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.769 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.769 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.792 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:21 np0005534516 nova_compute[253538]: 2025-11-25 08:22:21.797 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:21 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.054 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.116 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] resizing rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.528 253542 DEBUG nova.objects.instance [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lazy-loading 'migration_context' on Instance uuid a134e11b-8c87-4a96-a92c-b4dfdad7e518 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.541 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.541 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Ensure instance console log exists: /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.542 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.542 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.543 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.719 253542 DEBUG nova.network.neutron [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.720 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.723 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.729 253542 WARNING nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.736 253542 DEBUG nova.virt.libvirt.host [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.736 253542 DEBUG nova.virt.libvirt.host [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.741 253542 DEBUG nova.virt.libvirt.host [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.742 253542 DEBUG nova.virt.libvirt.host [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.742 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.743 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.744 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.744 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.744 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.745 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.745 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.745 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.746 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.746 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.747 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.747 253542 DEBUG nova.virt.hardware [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:22:22 np0005534516 nova_compute[253538]: 2025-11-25 08:22:22.750 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 104 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 143 op/s
Nov 25 03:22:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348105461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.303 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.325 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.329 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071656316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.751 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.753 253542 DEBUG nova.objects.instance [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lazy-loading 'pci_devices' on Instance uuid a134e11b-8c87-4a96-a92c-b4dfdad7e518 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.768 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <uuid>a134e11b-8c87-4a96-a92c-b4dfdad7e518</uuid>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <name>instance-00000004</name>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1374684485</nova:name>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:22:22</nova:creationTime>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:user uuid="8572d5befb7449ee93a99483cc1a148b">tempest-DeleteServersAdminTestJSON-1175531157-project-member</nova:user>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <nova:project uuid="0706b4d7e21644ee997b95ec96c1fca2">tempest-DeleteServersAdminTestJSON-1175531157</nova:project>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="serial">a134e11b-8c87-4a96-a92c-b4dfdad7e518</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="uuid">a134e11b-8c87-4a96-a92c-b4dfdad7e518</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/console.log" append="off"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:22:23 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:22:23 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:22:23 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:22:23 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.818 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.818 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.819 253542 INFO nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Using config drive#033[00m
Nov 25 03:22:23 np0005534516 podman[270225]: 2025-11-25 08:22:23.838489606 +0000 UTC m=+0.075487525 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 03:22:23 np0005534516 nova_compute[253538]: 2025-11-25 08:22:23.848 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.124 253542 INFO nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Creating config drive at /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.133 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa4obmgfb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.263 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa4obmgfb" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.287 253542 DEBUG nova.storage.rbd_utils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] rbd image a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.293 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.543 253542 DEBUG oslo_concurrency.processutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config a134e11b-8c87-4a96-a92c-b4dfdad7e518_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:24 np0005534516 nova_compute[253538]: 2025-11-25 08:22:24.544 253542 INFO nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Deleting local config drive /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518/disk.config because it was imported into RBD.#033[00m
Nov 25 03:22:24 np0005534516 systemd-machined[215790]: New machine qemu-4-instance-00000004.
Nov 25 03:22:24 np0005534516 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 25 03:22:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 150 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 179 op/s
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.601 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058945.600612, a134e11b-8c87-4a96-a92c-b4dfdad7e518 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.601 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.604 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.604 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.608 253542 INFO nova.virt.libvirt.driver [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance spawned successfully.#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.608 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.631 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.634 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.635 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.635 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.635 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.636 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.636 253542 DEBUG nova.virt.libvirt.driver [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.639 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.678 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.679 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058945.6041136, a134e11b-8c87-4a96-a92c-b4dfdad7e518 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.679 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] VM Started (Lifecycle Event)#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.699 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.702 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.717 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.895 253542 INFO nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Took 4.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:22:25 np0005534516 nova_compute[253538]: 2025-11-25 08:22:25.896 253542 DEBUG nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:26 np0005534516 nova_compute[253538]: 2025-11-25 08:22:26.006 253542 INFO nova.compute.manager [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Took 5.55 seconds to build instance.#033[00m
Nov 25 03:22:26 np0005534516 nova_compute[253538]: 2025-11-25 08:22:26.035 253542 DEBUG oslo_concurrency.lockutils [None req-33374b29-0e9e-40a5-8b2d-7f0c2119472c 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:26 np0005534516 nova_compute[253538]: 2025-11-25 08:22:26.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:26 np0005534516 podman[270362]: 2025-11-25 08:22:26.834550354 +0000 UTC m=+0.080664280 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 03:22:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 150 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 492 KiB/s rd, 3.3 MiB/s wr, 112 op/s
Nov 25 03:22:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:26.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.853 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.854 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.854 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.854 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.855 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.856 253542 INFO nova.compute.manager [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Terminating instance#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.857 253542 DEBUG nova.compute.manager [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:22:27 np0005534516 kernel: tap5235689d-93 (unregistering): left promiscuous mode
Nov 25 03:22:27 np0005534516 NetworkManager[48915]: <info>  [1764058947.9259] device (tap5235689d-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:22:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:27Z|00033|binding|INFO|Releasing lport 5235689d-9333-45f8-8f44-50e4a006f7d3 from this chassis (sb_readonly=0)
Nov 25 03:22:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:27Z|00034|binding|INFO|Setting lport 5235689d-9333-45f8-8f44-50e4a006f7d3 down in Southbound
Nov 25 03:22:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:27Z|00035|binding|INFO|Removing iface tap5235689d-93 ovn-installed in OVS
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.988 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.989 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.989 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.989 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.989 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.990 253542 INFO nova.compute.manager [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Terminating instance#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.991 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "refresh_cache-a134e11b-8c87-4a96-a92c-b4dfdad7e518" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.991 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquired lock "refresh_cache-a134e11b-8c87-4a96-a92c-b4dfdad7e518" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:22:27 np0005534516 nova_compute[253538]: 2025-11-25 08:22:27.992 253542 DEBUG nova.network.neutron [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:22:27 np0005534516 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 03:22:27 np0005534516 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 15.488s CPU time.
Nov 25 03:22:27 np0005534516 systemd-machined[215790]: Machine qemu-2-instance-00000002 terminated.
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.029 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:db:d0 10.100.0.10'], port_security=['fa:16:3e:c9:db:d0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '67445e66-65d6-487d-8c34-7d798ac485c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5235689d-9333-45f8-8f44-50e4a006f7d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.030 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5235689d-9333-45f8-8f44-50e4a006f7d3 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 unbound from our chassis#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.031 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef52fe4f-78d3-45fa-ab69-177fdfabe604, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.032 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4b4ae7-cab2-4b2f-ac0b-7cd11f284453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.033 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace which is not needed anymore#033[00m
Nov 25 03:22:28 np0005534516 NetworkManager[48915]: <info>  [1764058948.0787] manager: (tap5235689d-93): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.091 253542 INFO nova.virt.libvirt.driver [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Instance destroyed successfully.#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.091 253542 DEBUG nova.objects.instance [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'resources' on Instance uuid 67445e66-65d6-487d-8c34-7d798ac485c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.101 253542 DEBUG nova.virt.libvirt.vif [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:21:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-101687112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-101687112',id=2,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:22:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-5kfa9qry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:22:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=67445e66-65d6-487d-8c34-7d798ac485c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.102 253542 DEBUG nova.network.os_vif_util [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "5235689d-9333-45f8-8f44-50e4a006f7d3", "address": "fa:16:3e:c9:db:d0", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5235689d-93", "ovs_interfaceid": "5235689d-9333-45f8-8f44-50e4a006f7d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.102 253542 DEBUG nova.network.os_vif_util [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.106 253542 DEBUG os_vif [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.109 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5235689d-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.118 253542 INFO os_vif [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:db:d0,bridge_name='br-int',has_traffic_filtering=True,id=5235689d-9333-45f8-8f44-50e4a006f7d3,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5235689d-93')#033[00m
Nov 25 03:22:28 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [NOTICE]   (269820) : haproxy version is 2.8.14-c23fe91
Nov 25 03:22:28 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [NOTICE]   (269820) : path to executable is /usr/sbin/haproxy
Nov 25 03:22:28 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [WARNING]  (269820) : Exiting Master process...
Nov 25 03:22:28 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [ALERT]    (269820) : Current worker (269824) exited with code 143 (Terminated)
Nov 25 03:22:28 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[269796]: [WARNING]  (269820) : All workers exited. Exiting... (0)
Nov 25 03:22:28 np0005534516 systemd[1]: libpod-412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47.scope: Deactivated successfully.
Nov 25 03:22:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:28 np0005534516 podman[270415]: 2025-11-25 08:22:28.16946208 +0000 UTC m=+0.046062440 container died 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:22:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47-userdata-shm.mount: Deactivated successfully.
Nov 25 03:22:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b5b696d176ed1fe3f68ef96c36dc7d5b035659cfd38836579d018864edf7bf5a-merged.mount: Deactivated successfully.
Nov 25 03:22:28 np0005534516 podman[270415]: 2025-11-25 08:22:28.210768867 +0000 UTC m=+0.087369217 container cleanup 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:22:28 np0005534516 systemd[1]: libpod-conmon-412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47.scope: Deactivated successfully.
Nov 25 03:22:28 np0005534516 podman[270461]: 2025-11-25 08:22:28.272873276 +0000 UTC m=+0.042784259 container remove 412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.273 253542 DEBUG nova.compute.manager [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-unplugged-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.275 253542 DEBUG oslo_concurrency.lockutils [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.275 253542 DEBUG oslo_concurrency.lockutils [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.275 253542 DEBUG oslo_concurrency.lockutils [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.276 253542 DEBUG nova.compute.manager [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] No waiting events found dispatching network-vif-unplugged-5235689d-9333-45f8-8f44-50e4a006f7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.276 253542 DEBUG nova.compute.manager [req-a9db5c57-c093-4b03-a85e-f61d40bae6e1 req-0cc75d1f-cbce-4555-8a60-ed5f005ad538 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-unplugged-5235689d-9333-45f8-8f44-50e4a006f7d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.284 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d578b06d-95e2-459a-b5d1-4d348de83a60]: (4, ('Tue Nov 25 08:22:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47)\n412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47\nTue Nov 25 08:22:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47)\n412833b7c3fef201b41adc426f0095eb0c085685fe8b281b7e78c8c82e296c47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.286 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6cce6d63-b61a-46e9-9d02-6f268976bbcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.287 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:28 np0005534516 kernel: tapef52fe4f-70: left promiscuous mode
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0f5c01-e36c-416f-9441-4b8fa6e9a932]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.334 253542 DEBUG nova.network.neutron [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.338 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[586516ca-503b-4c7f-acf8-54f3ac703eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.341 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d95ef297-e6b1-46fa-8153-de169a05c7e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1764d3-9a3d-451f-a468-1c49a37a7238]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428387, 'reachable_time': 36564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270477, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 systemd[1]: run-netns-ovnmeta\x2def52fe4f\x2d78d3\x2d45fa\x2dab69\x2d177fdfabe604.mount: Deactivated successfully.
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.374 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:22:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:28.376 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[32d880eb-afcc-4666-ab4c-e67155093325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.466 253542 INFO nova.virt.libvirt.driver [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Deleting instance files /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8_del#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.467 253542 INFO nova.virt.libvirt.driver [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Deletion of /var/lib/nova/instances/67445e66-65d6-487d-8c34-7d798ac485c8_del complete#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.684 253542 INFO nova.compute.manager [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.685 253542 DEBUG oslo.service.loopingcall [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.686 253542 DEBUG nova.compute.manager [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.686 253542 DEBUG nova.network.neutron [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:22:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 167 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Nov 25 03:22:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:22:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288267613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:22:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:22:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288267613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:22:28 np0005534516 nova_compute[253538]: 2025-11-25 08:22:28.997 253542 DEBUG nova.network.neutron [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.035 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Releasing lock "refresh_cache-a134e11b-8c87-4a96-a92c-b4dfdad7e518" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.036 253542 DEBUG nova.compute.manager [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:22:29 np0005534516 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 25 03:22:29 np0005534516 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4.474s CPU time.
Nov 25 03:22:29 np0005534516 systemd-machined[215790]: Machine qemu-4-instance-00000004 terminated.
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.260 253542 INFO nova.virt.libvirt.driver [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance destroyed successfully.#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.261 253542 DEBUG nova.objects.instance [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lazy-loading 'resources' on Instance uuid a134e11b-8c87-4a96-a92c-b4dfdad7e518 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.588 253542 INFO nova.virt.libvirt.driver [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Deleting instance files /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518_del#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.588 253542 INFO nova.virt.libvirt.driver [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Deletion of /var/lib/nova/instances/a134e11b-8c87-4a96-a92c-b4dfdad7e518_del complete#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.660 253542 INFO nova.compute.manager [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.661 253542 DEBUG oslo.service.loopingcall [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.661 253542 DEBUG nova.compute.manager [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.662 253542 DEBUG nova.network.neutron [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.866 253542 DEBUG nova.network.neutron [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.904 253542 DEBUG nova.network.neutron [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:29 np0005534516 nova_compute[253538]: 2025-11-25 08:22:29.923 253542 INFO nova.compute.manager [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Took 0.26 seconds to deallocate network for instance.#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.019 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.020 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.131 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058935.130605, 04f4ebed-aaaa-4be2-9916-0361c1de57c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.132 253542 INFO nova.compute.manager [-] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.175 253542 DEBUG nova.compute.manager [None req-039b6bfd-3340-4a66-86a5-5e5e4e8eef0d - - - - - -] [instance: 04f4ebed-aaaa-4be2-9916-0361c1de57c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.212 253542 DEBUG oslo_concurrency.processutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.487 253542 DEBUG nova.compute.manager [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.488 253542 DEBUG oslo_concurrency.lockutils [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.488 253542 DEBUG oslo_concurrency.lockutils [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.489 253542 DEBUG oslo_concurrency.lockutils [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.489 253542 DEBUG nova.compute.manager [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] No waiting events found dispatching network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.489 253542 WARNING nova.compute.manager [req-3cfcf300-4912-4c5f-8071-00c88d9aad5e req-2dace0de-e1e5-4dc9-bb23-8232f3a62379 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received unexpected event network-vif-plugged-5235689d-9333-45f8-8f44-50e4a006f7d3 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:22:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395309643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.701 253542 DEBUG oslo_concurrency.processutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.707 253542 DEBUG nova.compute.provider_tree [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.723 253542 DEBUG nova.scheduler.client.report [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.758 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.858 253542 INFO nova.scheduler.client.report [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Deleted allocations for instance a134e11b-8c87-4a96-a92c-b4dfdad7e518#033[00m
Nov 25 03:22:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 132 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.946 253542 DEBUG oslo_concurrency.lockutils [None req-83f45b32-d7cc-48b1-9b3c-5ef4ede58338 8572d5befb7449ee93a99483cc1a148b 0706b4d7e21644ee997b95ec96c1fca2 - - default default] Lock "a134e11b-8c87-4a96-a92c-b4dfdad7e518" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:30 np0005534516 nova_compute[253538]: 2025-11-25 08:22:30.994 253542 DEBUG nova.network.neutron [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.054 253542 INFO nova.compute.manager [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Took 2.37 seconds to deallocate network for instance.#033[00m
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.164 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.165 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.213 253542 DEBUG oslo_concurrency.processutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3462311046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.656 253542 DEBUG oslo_concurrency.processutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.663 253542 DEBUG nova.compute.provider_tree [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.689 253542 DEBUG nova.scheduler.client.report [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.728 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.769 253542 INFO nova.scheduler.client.report [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Deleted allocations for instance 67445e66-65d6-487d-8c34-7d798ac485c8
Nov 25 03:22:31 np0005534516 nova_compute[253538]: 2025-11-25 08:22:31.856 253542 DEBUG oslo_concurrency.lockutils [None req-dbe5ab57-58f3-4078-945a-6aba64d8d257 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "67445e66-65d6-487d-8c34-7d798ac485c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:31 np0005534516 podman[270545]: 2025-11-25 08:22:31.858211442 +0000 UTC m=+0.103546000 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.251 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.252 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.294 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.538 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.539 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.548 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:22:32 np0005534516 nova_compute[253538]: 2025-11-25 08:22:32.549 253542 INFO nova.compute.claims [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:22:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 84 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 193 op/s
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.000 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.132 253542 DEBUG nova.compute.manager [req-a0ca03a6-d3e1-4f61-a977-1a490089f6c4 req-1f35074c-c7c9-45ce-b565-739c010fc0c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Received event network-vif-deleted-5235689d-9333-45f8-8f44-50e4a006f7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:22:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577757621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.447 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.452 253542 DEBUG nova.compute.provider_tree [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.486 253542 DEBUG nova.scheduler.client.report [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.529 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.529 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.575 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.575 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.600 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.641 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.752 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.754 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.755 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Creating image(s)
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.785 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.821 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.853 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.858 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.883 253542 DEBUG nova.policy [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '19a48db5eafb4ccb9008a204aa3d72d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2671313ddba04346ac0e2eef435f909c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.927 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.928 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.929 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.929 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.953 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:33 np0005534516 nova_compute[253538]: 2025-11-25 08:22:33.956 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.252 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.311 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] resizing rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.400 253542 DEBUG nova.objects.instance [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'migration_context' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.604 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.631 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.637 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.638 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.639 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.670 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.672 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.715 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.717 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.739 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:34 np0005534516 nova_compute[253538]: 2025-11-25 08:22:34.744 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 57 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 202 op/s
Nov 25 03:22:35 np0005534516 nova_compute[253538]: 2025-11-25 08:22:35.904 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Successfully created port: 029a2ee5-4018-4b73-8953-5436c5af3666 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:22:35 np0005534516 nova_compute[253538]: 2025-11-25 08:22:35.935 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:36 np0005534516 nova_compute[253538]: 2025-11-25 08:22:36.031 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:22:36 np0005534516 nova_compute[253538]: 2025-11-25 08:22:36.031 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Ensure instance console log exists: /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:22:36 np0005534516 nova_compute[253538]: 2025-11-25 08:22:36.032 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:36 np0005534516 nova_compute[253538]: 2025-11-25 08:22:36.032 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:36 np0005534516 nova_compute[253538]: 2025-11-25 08:22:36.033 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 57 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Nov 25 03:22:38 np0005534516 nova_compute[253538]: 2025-11-25 08:22:38.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 166 op/s
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.014 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Successfully updated port: 029a2ee5-4018-4b73-8953-5436c5af3666 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.039 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.040 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.040 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.262 253542 DEBUG nova.compute.manager [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.262 253542 DEBUG nova.compute.manager [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing instance network info cache due to event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.263 253542 DEBUG oslo_concurrency.lockutils [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:39 np0005534516 nova_compute[253538]: 2025-11-25 08:22:39.498 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:22:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 03:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:41.048 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.218 253542 DEBUG nova.network.neutron [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.247 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.248 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance network_info: |[{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.249 253542 DEBUG oslo_concurrency.lockutils [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.250 253542 DEBUG nova.network.neutron [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.254 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start _get_guest_xml network_info=[{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [{'device_type': 'disk', 'device_name': '/dev/vdb', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.262 253542 WARNING nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.268 253542 DEBUG nova.virt.libvirt.host [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.269 253542 DEBUG nova.virt.libvirt.host [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.275 253542 DEBUG nova.virt.libvirt.host [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.277 253542 DEBUG nova.virt.libvirt.host [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.278 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.278 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:21:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1778939713',id=23,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1343327660',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.279 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.279 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.279 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.279 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.280 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.280 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.280 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.280 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.281 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.281 253542 DEBUG nova.virt.hardware [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.307 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525725904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.809 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:41 np0005534516 nova_compute[253538]: 2025-11-25 08:22:41.815 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063819186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.264 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.298 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.302 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/706444676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.741 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.743 253542 DEBUG nova.virt.libvirt.vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1503393729',id=5,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-1iln2b4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:22:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=4bc97ee2-5aba-4bb5-86e2-f0806a200c04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.744 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.744 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.745 253542 DEBUG nova.objects.instance [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.766 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <uuid>4bc97ee2-5aba-4bb5-86e2-f0806a200c04</uuid>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <name>instance-00000005</name>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1503393729</nova:name>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:22:41</nova:creationTime>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-1343327660">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:ephemeral>1</nova:ephemeral>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:user uuid="19a48db5eafb4ccb9008a204aa3d72d4">tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member</nova:user>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:project uuid="2671313ddba04346ac0e2eef435f909c">tempest-ServersWithSpecificFlavorTestJSON-625252619</nova:project>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <nova:port uuid="029a2ee5-4018-4b73-8953-5436c5af3666">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="serial">4bc97ee2-5aba-4bb5-86e2-f0806a200c04</entry>
Nov 25 03:22:42 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="uuid">4bc97ee2-5aba-4bb5-86e2-f0806a200c04</entry>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <target dev="vdb" bus="virtio"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:99:b2:39"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <target dev="tap029a2ee5-40"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/console.log" append="off"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:22:42 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:22:42 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:22:42 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:22:42 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.768 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Preparing to wait for external event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.769 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.769 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.770 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.770 253542 DEBUG nova.virt.libvirt.vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1503393729',id=5,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-1iln2b4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:22:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=4bc97ee2-5aba-4bb5-86e2-f0806a200c04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.771 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.772 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.772 253542 DEBUG os_vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.783 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap029a2ee5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap029a2ee5-40, col_values=(('external_ids', {'iface-id': '029a2ee5-4018-4b73-8953-5436c5af3666', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:b2:39', 'vm-uuid': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:42 np0005534516 NetworkManager[48915]: <info>  [1764058962.7871] manager: (tap029a2ee5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.794 253542 INFO os_vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40')#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.850 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.850 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No VIF found with MAC fa:16:3e:99:b2:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Using config drive#033[00m
Nov 25 03:22:42 np0005534516 nova_compute[253538]: 2025-11-25 08:22:42.873 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.089 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058948.0885918, 67445e66-65d6-487d-8c34-7d798ac485c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.089 253542 INFO nova.compute.manager [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.118 253542 DEBUG nova.compute.manager [None req-3c004bf3-cf1f-4fa3-b305-06a94a8f88e3 - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.165 253542 DEBUG nova.network.neutron [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated VIF entry in instance network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.166 253542 DEBUG nova.network.neutron [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.182 253542 DEBUG oslo_concurrency.lockutils [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.368 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Creating config drive at /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.376 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8bqjidi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.522 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8bqjidi" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.559 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.563 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:43 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.758 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.759 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deleting local config drive /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config because it was imported into RBD.#033[00m
Nov 25 03:22:43 np0005534516 kernel: tap029a2ee5-40: entered promiscuous mode
Nov 25 03:22:43 np0005534516 NetworkManager[48915]: <info>  [1764058963.8329] manager: (tap029a2ee5-40): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:43Z|00036|binding|INFO|Claiming lport 029a2ee5-4018-4b73-8953-5436c5af3666 for this chassis.
Nov 25 03:22:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:43Z|00037|binding|INFO|029a2ee5-4018-4b73-8953-5436c5af3666: Claiming fa:16:3e:99:b2:39 10.100.0.13
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.841 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b2:39 10.100.0.13'], port_security=['fa:16:3e:99:b2:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=029a2ee5-4018-4b73-8953-5436c5af3666) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.843 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 029a2ee5-4018-4b73-8953-5436c5af3666 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 bound to our chassis#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.844 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef52fe4f-78d3-45fa-ab69-177fdfabe604#033[00m
Nov 25 03:22:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:43Z|00038|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 ovn-installed in OVS
Nov 25 03:22:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:43Z|00039|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 up in Southbound
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:43 np0005534516 nova_compute[253538]: 2025-11-25 08:22:43.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.854 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56cb4533-78aa-443d-a15c-070093a36b81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.856 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef52fe4f-71 in ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.857 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef52fe4f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.857 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf71671-009d-415f-8153-d5ace5ad7d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.859 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c67c94d-fae5-4ebb-ae8a-a1d7374bb0c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 systemd-machined[215790]: New machine qemu-5-instance-00000005.
Nov 25 03:22:43 np0005534516 systemd-udevd[271051]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:22:43 np0005534516 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.873 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d080f-9eb5-4074-b3fe-b4f45f784d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 NetworkManager[48915]: <info>  [1764058963.8764] device (tap029a2ee5-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:22:43 np0005534516 NetworkManager[48915]: <info>  [1764058963.8772] device (tap029a2ee5-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.895 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f45f24-e3e0-4dbe-b93b-a95e2c9dd83f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.918 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55b682-c144-4018-b03d-69055a233dd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1facfb-5e9c-4b32-b477-bb6da0b595e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 NetworkManager[48915]: <info>  [1764058963.9250] manager: (tapef52fe4f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.955 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e747701d-d195-4363-92ba-849c2c3cc644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.958 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47eac41d-9341-45b3-ac22-1268ff8cc7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:43 np0005534516 NetworkManager[48915]: <info>  [1764058963.9834] device (tapef52fe4f-70): carrier: link connected
Nov 25 03:22:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.991 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b434ba-5a72-4207-8037-c9fd1f31f5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89444853-bb10-4803-b064-129fa781b4da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431858, 'reachable_time': 15570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271084, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e6de10-b369-4ffc-8e14-66ca315e2dad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:1cc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431858, 'tstamp': 431858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271085, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.051 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87cda7dd-8430-4fd7-8d22-629ad2756e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431858, 'reachable_time': 15570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271097, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.089 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b1e7d4-19ca-4db4-9042-64c13f8a0204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f25294f-170e-407b-b43c-074ed27d9614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.152 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.152 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.153 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef52fe4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:44 np0005534516 NetworkManager[48915]: <info>  [1764058964.2020] manager: (tapef52fe4f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 03:22:44 np0005534516 kernel: tapef52fe4f-70: entered promiscuous mode
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.206 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef52fe4f-70, col_values=(('external_ids', {'iface-id': 'c3160d37-f1ba-461a-855e-31f72d46baee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:22:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:22:44Z|00040|binding|INFO|Releasing lport c3160d37-f1ba-461a-855e-31f72d46baee from this chassis (sb_readonly=0)
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.237 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5e1e4e-7b58-4ae6-a5e9-8e175ffebcc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.239 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:22:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.240 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'env', 'PROCESS_TAG=haproxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef52fe4f-78d3-45fa-ab69-177fdfabe604.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.258 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058949.2571197, a134e11b-8c87-4a96-a92c-b4dfdad7e518 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.259 253542 INFO nova.compute.manager [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.290 253542 DEBUG nova.compute.manager [None req-5ced3f25-a661-4235-9a0c-185cc5561744 - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.328 253542 DEBUG nova.compute.manager [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.329 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.330 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.331 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.331 253542 DEBUG nova.compute.manager [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Processing event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.674 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.67421, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.675 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Started (Lifecycle Event)#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.678 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.682 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.686 253542 INFO nova.virt.libvirt.driver [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance spawned successfully.#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.687 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.701 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.708 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.712 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.713 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.713 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.714 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.715 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.715 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.743 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.744 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.6755211, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.744 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.784 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.6814785, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.785 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:22:44 np0005534516 podman[271178]: 2025-11-25 08:22:44.71564856 +0000 UTC m=+0.052409398 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.810 253542 INFO nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 11.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.811 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.813 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.825 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.879 253542 INFO nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 12.37 seconds to build instance.#033[00m
Nov 25 03:22:44 np0005534516 nova_compute[253538]: 2025-11-25 08:22:44.895 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 03:22:44 np0005534516 podman[271178]: 2025-11-25 08:22:44.915704862 +0000 UTC m=+0.252465660 container create f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:22:45 np0005534516 systemd[1]: Started libpod-conmon-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope.
Nov 25 03:22:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6262865f4caeb3aa76bd5939a209fa5913aca3e8ecc37800451d727ee1322951/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:45 np0005534516 podman[271178]: 2025-11-25 08:22:45.123745137 +0000 UTC m=+0.460505985 container init f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 03:22:45 np0005534516 podman[271178]: 2025-11-25 08:22:45.135974348 +0000 UTC m=+0.472735177 container start f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:22:45 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : New worker (271199) forked
Nov 25 03:22:45 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : Loading success.
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.511 253542 DEBUG nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:22:46 np0005534516 nova_compute[253538]: 2025-11-25 08:22:46.511 253542 WARNING nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received unexpected event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:22:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Nov 25 03:22:47 np0005534516 nova_compute[253538]: 2025-11-25 08:22:47.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:48 np0005534516 nova_compute[253538]: 2025-11-25 08:22:48.563 253542 DEBUG nova.compute.manager [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:22:48 np0005534516 nova_compute[253538]: 2025-11-25 08:22:48.563 253542 DEBUG nova.compute.manager [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing instance network info cache due to event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:22:48 np0005534516 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:22:48 np0005534516 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:22:48 np0005534516 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:22:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 03:22:49 np0005534516 nova_compute[253538]: 2025-11-25 08:22:49.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9233aae9-9626-40ab-b90b-9a25b0c588d5 does not exist
Nov 25 03:22:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5207d199-5b5e-44e6-9eda-4884de70436c does not exist
Nov 25 03:22:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8f40c3e3-3c1c-447d-b86a-3b1365c103a0 does not exist
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:22:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:22:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 87 op/s
Nov 25 03:22:50 np0005534516 nova_compute[253538]: 2025-11-25 08:22:50.978 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated VIF entry in instance network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:22:50 np0005534516 nova_compute[253538]: 2025-11-25 08:22:50.978 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:22:51 np0005534516 nova_compute[253538]: 2025-11-25 08:22:51.050 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.137577979 +0000 UTC m=+0.101777281 container create 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.063523835 +0000 UTC m=+0.027723116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:51 np0005534516 systemd[1]: Started libpod-conmon-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope.
Nov 25 03:22:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.328959898 +0000 UTC m=+0.293159229 container init 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:22:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:22:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.338407163 +0000 UTC m=+0.302606474 container start 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.342791335 +0000 UTC m=+0.306990636 container attach 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:22:51 np0005534516 strange_mendeleev[271497]: 167 167
Nov 25 03:22:51 np0005534516 systemd[1]: libpod-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope: Deactivated successfully.
Nov 25 03:22:51 np0005534516 conmon[271497]: conmon 52150e599c254c0a609a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope/container/memory.events
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.348034452 +0000 UTC m=+0.312233703 container died 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:22:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fc48b6a9bbcb890314d71694179e246d31c7a0e133981316ef634972c671ae31-merged.mount: Deactivated successfully.
Nov 25 03:22:51 np0005534516 podman[271481]: 2025-11-25 08:22:51.389533014 +0000 UTC m=+0.353732265 container remove 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:22:51 np0005534516 systemd[1]: libpod-conmon-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope: Deactivated successfully.
Nov 25 03:22:51 np0005534516 podman[271521]: 2025-11-25 08:22:51.611975462 +0000 UTC m=+0.067626435 container create 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:22:51 np0005534516 systemd[1]: Started libpod-conmon-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope.
Nov 25 03:22:51 np0005534516 podman[271521]: 2025-11-25 08:22:51.590903662 +0000 UTC m=+0.046554625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:51 np0005534516 podman[271521]: 2025-11-25 08:22:51.735174551 +0000 UTC m=+0.190825504 container init 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:22:51 np0005534516 podman[271521]: 2025-11-25 08:22:51.743395822 +0000 UTC m=+0.199046765 container start 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:22:51 np0005534516 podman[271521]: 2025-11-25 08:22:51.754265706 +0000 UTC m=+0.209916669 container attach 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 03:22:52 np0005534516 charming_shannon[271538]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:22:52 np0005534516 charming_shannon[271538]: --> relative data size: 1.0
Nov 25 03:22:52 np0005534516 charming_shannon[271538]: --> All data devices are unavailable
Nov 25 03:22:52 np0005534516 systemd[1]: libpod-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope: Deactivated successfully.
Nov 25 03:22:52 np0005534516 nova_compute[253538]: 2025-11-25 08:22:52.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:52 np0005534516 podman[271567]: 2025-11-25 08:22:52.834254895 +0000 UTC m=+0.047490150 container died 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:22:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91-merged.mount: Deactivated successfully.
Nov 25 03:22:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 03:22:52 np0005534516 podman[271567]: 2025-11-25 08:22:52.974689537 +0000 UTC m=+0.187924802 container remove 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:22:52 np0005534516 systemd[1]: libpod-conmon-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope: Deactivated successfully.
Nov 25 03:22:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:22:53
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'cephfs.cephfs.data']
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:22:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:22:53 np0005534516 podman[271723]: 2025-11-25 08:22:53.719001177 +0000 UTC m=+0.046937125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:53 np0005534516 nova_compute[253538]: 2025-11-25 08:22:53.888 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:53 np0005534516 nova_compute[253538]: 2025-11-25 08:22:53.890 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:53 np0005534516 nova_compute[253538]: 2025-11-25 08:22:53.908 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:22:53 np0005534516 podman[271723]: 2025-11-25 08:22:53.925736195 +0000 UTC m=+0.253672043 container create 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:22:53 np0005534516 systemd[1]: Started libpod-conmon-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope.
Nov 25 03:22:53 np0005534516 nova_compute[253538]: 2025-11-25 08:22:53.988 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:22:53 np0005534516 nova_compute[253538]: 2025-11-25 08:22:53.989 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:22:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.005 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.006 253542 INFO nova.compute.claims [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:22:54 np0005534516 podman[271723]: 2025-11-25 08:22:54.042094344 +0000 UTC m=+0.370030232 container init 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 25 03:22:54 np0005534516 podman[271723]: 2025-11-25 08:22:54.053426641 +0000 UTC m=+0.381362529 container start 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:22:54 np0005534516 podman[271723]: 2025-11-25 08:22:54.060981432 +0000 UTC m=+0.388917310 container attach 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:22:54 np0005534516 fervent_herschel[271740]: 167 167
Nov 25 03:22:54 np0005534516 systemd[1]: libpod-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope: Deactivated successfully.
Nov 25 03:22:54 np0005534516 podman[271723]: 2025-11-25 08:22:54.062572127 +0000 UTC m=+0.390507975 container died 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:22:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2d0eb8c71d7e3aa3fd702729fe91bfd803df4ff7014dada6e31e843a89e88fa5-merged.mount: Deactivated successfully.
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.170 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:22:54 np0005534516 podman[271723]: 2025-11-25 08:22:54.220630162 +0000 UTC m=+0.548566010 container remove 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:22:54 np0005534516 systemd[1]: libpod-conmon-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope: Deactivated successfully.
Nov 25 03:22:54 np0005534516 podman[271737]: 2025-11-25 08:22:54.271108725 +0000 UTC m=+0.295921196 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.362 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:22:54 np0005534516 podman[271800]: 2025-11-25 08:22:54.428339999 +0000 UTC m=+0.054765665 container create 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 03:22:54 np0005534516 systemd[1]: Started libpod-conmon-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope.
Nov 25 03:22:54 np0005534516 podman[271800]: 2025-11-25 08:22:54.398858013 +0000 UTC m=+0.025283719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:54 np0005534516 podman[271800]: 2025-11-25 08:22:54.5591229 +0000 UTC m=+0.185548556 container init 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:22:54 np0005534516 podman[271800]: 2025-11-25 08:22:54.572942717 +0000 UTC m=+0.199368363 container start 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:22:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:22:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004869931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.617 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.625 253542 DEBUG nova.compute.provider_tree [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.653 253542 DEBUG nova.scheduler.client.report [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:22:54 np0005534516 podman[271800]: 2025-11-25 08:22:54.671836505 +0000 UTC m=+0.298262161 container attach 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.678 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.679 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.735 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.736 253542 DEBUG nova.network.neutron [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.763 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.785 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.867 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.868 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.869 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating image(s)
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.891 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.915 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.937 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:54 np0005534516 nova_compute[253538]: 2025-11-25 08:22:54.941 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.032 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.033 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.033 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.034 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.058 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.062 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.183 253542 DEBUG nova.network.neutron [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 03:22:55 np0005534516 nova_compute[253538]: 2025-11-25 08:22:55.184 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]: {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    "0": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "devices": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "/dev/loop3"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            ],
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_name": "ceph_lv0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_size": "21470642176",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "name": "ceph_lv0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "tags": {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_name": "ceph",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.crush_device_class": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.encrypted": "0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_id": "0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.vdo": "0"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            },
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "vg_name": "ceph_vg0"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        }
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    ],
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    "1": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "devices": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "/dev/loop4"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            ],
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_name": "ceph_lv1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_size": "21470642176",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "name": "ceph_lv1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "tags": {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_name": "ceph",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.crush_device_class": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.encrypted": "0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_id": "1",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.vdo": "0"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            },
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "vg_name": "ceph_vg1"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        }
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    ],
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    "2": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "devices": [
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "/dev/loop5"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            ],
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_name": "ceph_lv2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_size": "21470642176",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "name": "ceph_lv2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "tags": {
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.cluster_name": "ceph",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.crush_device_class": "",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.encrypted": "0",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osd_id": "2",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:                "ceph.vdo": "0"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            },
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "type": "block",
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:            "vg_name": "ceph_vg2"
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:        }
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]:    ]
Nov 25 03:22:55 np0005534516 musing_brahmagupta[271817]: }
Nov 25 03:22:55 np0005534516 systemd[1]: libpod-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope: Deactivated successfully.
Nov 25 03:22:55 np0005534516 conmon[271817]: conmon 5d383cbaab5a384db3d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope/container/memory.events
Nov 25 03:22:55 np0005534516 podman[271800]: 2025-11-25 08:22:55.380750804 +0000 UTC m=+1.007176470 container died 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:22:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd-merged.mount: Deactivated successfully.
Nov 25 03:22:55 np0005534516 podman[271800]: 2025-11-25 08:22:55.828923193 +0000 UTC m=+1.455348829 container remove 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:22:55 np0005534516 systemd[1]: libpod-conmon-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope: Deactivated successfully.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.222546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976222827, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2092, "num_deletes": 251, "total_data_size": 3409477, "memory_usage": 3474464, "flush_reason": "Manual Compaction"}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976288592, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3331619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20976, "largest_seqno": 23067, "table_properties": {"data_size": 3322153, "index_size": 5960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19488, "raw_average_key_size": 20, "raw_value_size": 3303054, "raw_average_value_size": 3426, "num_data_blocks": 268, "num_entries": 964, "num_filter_entries": 964, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058759, "oldest_key_time": 1764058759, "file_creation_time": 1764058976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 66115 microseconds, and 8480 cpu microseconds.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.288668) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3331619 bytes OK
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.288703) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304048) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304102) EVENT_LOG_v1 {"time_micros": 1764058976304088, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304124) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3400666, prev total WAL file size 3400666, number of live WAL files 2.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.305522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3253KB)], [50(7551KB)]
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976305576, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11064539, "oldest_snapshot_seqno": -1}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4814 keys, 9321849 bytes, temperature: kUnknown
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976635891, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9321849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9287209, "index_size": 21474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 118322, "raw_average_key_size": 24, "raw_value_size": 9197735, "raw_average_value_size": 1910, "num_data_blocks": 907, "num_entries": 4814, "num_filter_entries": 4814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.636203) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9321849 bytes
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.639200) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.5 rd, 28.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5332, records dropped: 518 output_compression: NoCompression
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.639230) EVENT_LOG_v1 {"time_micros": 1764058976639216, "job": 26, "event": "compaction_finished", "compaction_time_micros": 330408, "compaction_time_cpu_micros": 36959, "output_level": 6, "num_output_files": 1, "total_output_size": 9321849, "num_input_records": 5332, "num_output_records": 4814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.639794776 +0000 UTC m=+0.129190559 container create 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976640479, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976643018, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.305408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.569445336 +0000 UTC m=+0.058841169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:56 np0005534516 nova_compute[253538]: 2025-11-25 08:22:56.682 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:56 np0005534516 systemd[1]: Started libpod-conmon-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope.
Nov 25 03:22:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:56 np0005534516 nova_compute[253538]: 2025-11-25 08:22:56.769 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] resizing rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.806208445 +0000 UTC m=+0.295604218 container init 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.816269037 +0000 UTC m=+0.305664810 container start 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:22:56 np0005534516 great_wescoff[272109]: 167 167
Nov 25 03:22:56 np0005534516 systemd[1]: libpod-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope: Deactivated successfully.
Nov 25 03:22:56 np0005534516 conmon[272109]: conmon 89f92b51218aaeeba043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope/container/memory.events
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.892561273 +0000 UTC m=+0.381957026 container attach 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:22:56 np0005534516 podman[272075]: 2025-11-25 08:22:56.893477299 +0000 UTC m=+0.382873042 container died 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:22:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Nov 25 03:22:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8d57164362c9ccd3bd8b67c02b10ce51ccb8222f91c6f0d05f3ad8467b8964d6-merged.mount: Deactivated successfully.
Nov 25 03:22:57 np0005534516 podman[272075]: 2025-11-25 08:22:57.38460498 +0000 UTC m=+0.874000743 container remove 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:22:57 np0005534516 systemd[1]: libpod-conmon-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope: Deactivated successfully.
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.497 253542 DEBUG nova.objects.instance [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'migration_context' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:22:57 np0005534516 podman[272162]: 2025-11-25 08:22:57.50209174 +0000 UTC m=+0.516961686 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.512 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.513 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Ensure instance console log exists: /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.514 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.514 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.515 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.517 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.528 253542 WARNING nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.538 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.540 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.545 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.546 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.547 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.548 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.549 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.550 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.551 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.551 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.552 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.552 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.557 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.662 253542 DEBUG oslo_concurrency.processutils [None req-457a00a4-03e6-4892-99d8-37e20d3e1cb2 3469d64ae20e4870ad703ac6d75a2a35 5a87aa3f5a47431ba468b9c1fdbcf5cd - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:57 np0005534516 podman[272208]: 2025-11-25 08:22:57.597590963 +0000 UTC m=+0.033533129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.697 253542 DEBUG oslo_concurrency.processutils [None req-457a00a4-03e6-4892-99d8-37e20d3e1cb2 3469d64ae20e4870ad703ac6d75a2a35 5a87aa3f5a47431ba468b9c1fdbcf5cd - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:57 np0005534516 podman[272208]: 2025-11-25 08:22:57.719153007 +0000 UTC m=+0.155095183 container create 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:22:57 np0005534516 nova_compute[253538]: 2025-11-25 08:22:57.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:22:57 np0005534516 systemd[1]: Started libpod-conmon-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope.
Nov 25 03:22:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:22:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:22:57 np0005534516 podman[272208]: 2025-11-25 08:22:57.91569571 +0000 UTC m=+0.351637876 container init 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:22:57 np0005534516 podman[272208]: 2025-11-25 08:22:57.923428026 +0000 UTC m=+0.359370152 container start 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:22:57 np0005534516 podman[272208]: 2025-11-25 08:22:57.965739081 +0000 UTC m=+0.401681257 container attach 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:22:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643865434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.052 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.071 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.074 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:22:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:22:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121262340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.564 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.566 253542 DEBUG nova.objects.instance [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.579 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <uuid>30491b9b-e328-43ff-9a35-3f5afa6fed34</uuid>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <name>instance-00000006</name>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:name>tempest-LiveMigrationNegativeTest-server-2012193751</nova:name>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:22:57</nova:creationTime>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:user uuid="e93ad3d8111e48218a5ab899be7d3708">tempest-LiveMigrationNegativeTest-1761635651-project-member</nova:user>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <nova:project uuid="029b328a15754b45970b2053b56564bc">tempest-LiveMigrationNegativeTest-1761635651</nova:project>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="serial">30491b9b-e328-43ff-9a35-3f5afa6fed34</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="uuid">30491b9b-e328-43ff-9a35-3f5afa6fed34</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/30491b9b-e328-43ff-9a35-3f5afa6fed34_disk">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/console.log" append="off"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:22:58 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:22:58 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:22:58 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:22:58 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.625 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.626 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.626 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Using config drive
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.647 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.876 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating config drive at /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config
Nov 25 03:22:58 np0005534516 nova_compute[253538]: 2025-11-25 08:22:58.882 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_td_jay8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 119 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Nov 25 03:22:59 np0005534516 amazing_wing[272244]: {
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_id": 1,
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "type": "bluestore"
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    },
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_id": 2,
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "type": "bluestore"
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    },
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_id": 0,
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:        "type": "bluestore"
Nov 25 03:22:59 np0005534516 amazing_wing[272244]:    }
Nov 25 03:22:59 np0005534516 amazing_wing[272244]: }
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.019 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_td_jay8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:59 np0005534516 systemd[1]: libpod-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Deactivated successfully.
Nov 25 03:22:59 np0005534516 systemd[1]: libpod-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Consumed 1.098s CPU time.
Nov 25 03:22:59 np0005534516 podman[272208]: 2025-11-25 08:22:59.03785303 +0000 UTC m=+1.473795206 container died 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.059 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.064 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:22:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9-merged.mount: Deactivated successfully.
Nov 25 03:22:59 np0005534516 podman[272208]: 2025-11-25 08:22:59.184788123 +0000 UTC m=+1.620730269 container remove 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:22:59 np0005534516 systemd[1]: libpod-conmon-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Deactivated successfully.
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.255 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.256 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deleting local config drive /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config because it was imported into RBD.
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 97476215-8325-442e-a4e8-a2f3d5922981 does not exist
Nov 25 03:22:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 82f75854-8c30-4852-b118-62f75371b604 does not exist
Nov 25 03:22:59 np0005534516 systemd-machined[215790]: New machine qemu-6-instance-00000006.
Nov 25 03:22:59 np0005534516 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:22:59 np0005534516 nova_compute[253538]: 2025-11-25 08:22:59.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.092 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058980.0920835, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.094 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Resumed (Lifecycle Event)
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.098 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.099 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.113 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.114 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.116 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.117 253542 INFO nova.virt.libvirt.driver [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance spawned successfully.
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.119 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.123 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.145 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058980.0946102, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Started (Lifecycle Event)
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.156 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.157 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.157 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.158 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.158 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.159 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.182 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.187 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.230 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.233 253542 INFO nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 5.37 seconds to spawn the instance on the hypervisor.
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.234 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.265 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.266 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.276 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.277 253542 INFO nova.compute.claims [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.314 253542 INFO nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 6.35 seconds to build instance.
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.346 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.424 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:00Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:b2:39 10.100.0.13
Nov 25 03:23:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:00Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:b2:39 10.100.0.13
Nov 25 03:23:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312403547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.908 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.914 253542 DEBUG nova.compute.provider_tree [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:23:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 156 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 883 KiB/s rd, 2.8 MiB/s wr, 78 op/s
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.930 253542 DEBUG nova.scheduler.client.report [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.956 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:00 np0005534516 nova_compute[253538]: 2025-11-25 08:23:00.957 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.016 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.029 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.048 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.125 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.126 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.127 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating image(s)
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.152 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.180 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.204 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.208 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.272 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.273 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.274 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.274 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.297 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.301 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a8f34404-8153-46bd-aea0-02d909cd66a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.634 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a8f34404-8153-46bd-aea0-02d909cd66a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.704 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] resizing rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.798 253542 DEBUG nova.objects.instance [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'migration_context' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.809 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.809 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Ensure instance console log exists: /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.813 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.816 253542 WARNING nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.820 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.820 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.823 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.823 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:23:01 np0005534516 nova_compute[253538]: 2025-11-25 08:23:01.828 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620578608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.252 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.272 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.275 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367038960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.702 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.705 253542 DEBUG nova.objects.instance [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.728 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <uuid>a8f34404-8153-46bd-aea0-02d909cd66a9</uuid>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <name>instance-00000007</name>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1222467350</nova:name>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:23:01</nova:creationTime>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:user uuid="2efed9769e444ce29167cef08d536e28">tempest-ServerDiagnosticsV248Test-1239151635-project-member</nova:user>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <nova:project uuid="87f3ce31098e4e0cb8717a11dd3fee23">tempest-ServerDiagnosticsV248Test-1239151635</nova:project>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="serial">a8f34404-8153-46bd-aea0-02d909cd66a9</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="uuid">a8f34404-8153-46bd-aea0-02d909cd66a9</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a8f34404-8153-46bd-aea0-02d909cd66a9_disk">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/console.log" append="off"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:23:02 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:23:02 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:23:02 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:23:02 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Using config drive
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.828 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:02 np0005534516 podman[272747]: 2025-11-25 08:23:02.832432795 +0000 UTC m=+0.085248148 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:23:02 np0005534516 nova_compute[253538]: 2025-11-25 08:23:02.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:23:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 193 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.9 MiB/s wr, 139 op/s
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.034 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating config drive at /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.039 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x9uigvi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.181 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x9uigvi" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.202 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.206 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012975048023702545 of space, bias 1.0, pg target 0.38925144071107637 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:23:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.714 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.715 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.734 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.799 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.799 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.807 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.807 253542 INFO nova.compute.claims [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.904 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.904 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deleting local config drive /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config because it was imported into RBD.
Nov 25 03:23:03 np0005534516 systemd-machined[215790]: New machine qemu-7-instance-00000007.
Nov 25 03:23:03 np0005534516 nova_compute[253538]: 2025-11-25 08:23:03.956 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:03 np0005534516 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:23:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437880790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.413 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.421 253542 DEBUG nova.compute.provider_tree [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.437 253542 DEBUG nova.scheduler.client.report [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.464 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.514 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.515 253542 DEBUG nova.network.neutron [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.536 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.553 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.674 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.676 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.676 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating image(s)
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.694 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.716 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.739 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.743 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.797 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.815 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:23:04 np0005534516 nova_compute[253538]: 2025-11-25 08:23:04.817 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:23:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.105 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058985.10516, a8f34404-8153-46bd-aea0-02d909cd66a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.107 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Resumed (Lifecycle Event)
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.112 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.113 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.118 253542 INFO nova.virt.libvirt.driver [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance spawned successfully.
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.119 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.144 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.150 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.153 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.182 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.183 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058985.1070108, a8f34404-8153-46bd-aea0-02d909cd66a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.183 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Started (Lifecycle Event)#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.196 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.200 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.215 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.304 253542 INFO nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 4.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.304 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.372 253542 INFO nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 5.15 seconds to build instance.#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.401 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.739 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.922s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.765 253542 DEBUG nova.network.neutron [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.766 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:23:05 np0005534516 nova_compute[253538]: 2025-11-25 08:23:05.808 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] resizing rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.261 253542 DEBUG nova.objects.instance [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'migration_context' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.272 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Ensure instance console log exists: /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.274 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.276 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.281 253542 WARNING nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.285 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.286 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.288 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.288 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.291 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.293 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.552 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.578 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:23:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598226049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.783 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.818 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.825 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:23:06 np0005534516 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.000 253542 DEBUG nova.compute.manager [None req-4d910055-5443-4b5f-9a6b-aaa1a0992ab6 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.004 253542 INFO nova.compute.manager [None req-4d910055-5443-4b5f-9a6b-aaa1a0992ab6 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Retrieving diagnostics#033[00m
Nov 25 03:23:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309992808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.237 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.239 253542 DEBUG nova.objects.instance [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.253 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <uuid>7779552a-aa17-4b2f-8b15-69121d6b6a63</uuid>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <name>instance-00000008</name>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1157235376</nova:name>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:23:06</nova:creationTime>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:user uuid="e93ad3d8111e48218a5ab899be7d3708">tempest-LiveMigrationNegativeTest-1761635651-project-member</nova:user>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <nova:project uuid="029b328a15754b45970b2053b56564bc">tempest-LiveMigrationNegativeTest-1761635651</nova:project>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="serial">7779552a-aa17-4b2f-8b15-69121d6b6a63</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="uuid">7779552a-aa17-4b2f-8b15-69121d6b6a63</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7779552a-aa17-4b2f-8b15-69121d6b6a63_disk">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/console.log" append="off"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:23:07 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:23:07 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:23:07 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:23:07 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.325 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.325 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.326 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Using config drive#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.350 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.524 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating config drive at /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.529 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjpluod_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.657 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjpluod_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.681 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.684 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.844 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.968 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:07 np0005534516 nova_compute[253538]: 2025-11-25 08:23:07.970 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deleting local config drive /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config because it was imported into RBD.#033[00m
Nov 25 03:23:08 np0005534516 systemd-machined[215790]: New machine qemu-8-instance-00000008.
Nov 25 03:23:08 np0005534516 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 25 03:23:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.610 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058988.6102822, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.611 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.615 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.615 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.618 253542 INFO nova.virt.libvirt.driver [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance spawned successfully.#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.618 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.630 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.638 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.647 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.647 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.649 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.656 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058988.6144156, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Started (Lifecycle Event)#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.695 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.701 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.729 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.741 253542 INFO nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 4.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.742 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.839 253542 INFO nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 5.06 seconds to build instance.#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.850 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.867 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.868 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.869 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.871 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:08 np0005534516 nova_compute[253538]: 2025-11-25 08:23:08.872 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:23:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 247 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Nov 25 03:23:09 np0005534516 nova_compute[253538]: 2025-11-25 08:23:09.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.729 253542 DEBUG nova.objects.instance [None req-86f476a4-aebb-4c1d-afe8-e5de1e03287f b0630450e2d545b5ac0a7dcada11588b 1249e66da1ff4135bf22d5fc5ecb4674 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.756 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058990.7553976, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.757 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:10 np0005534516 nova_compute[253538]: 2025-11-25 08:23:10.806 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:23:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 262 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Nov 25 03:23:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909703305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.323 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:11 np0005534516 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 03:23:11 np0005534516 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.826s CPU time.
Nov 25 03:23:11 np0005534516 systemd-machined[215790]: Machine qemu-8-instance-00000008 terminated.
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.505 253542 DEBUG nova.compute.manager [None req-86f476a4-aebb-4c1d-afe8-e5de1e03287f b0630450e2d545b5ac0a7dcada11588b 1249e66da1ff4135bf22d5fc5ecb4674 - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.607 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.608 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.613 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.614 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.614 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.618 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.618 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.624 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.624 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.804 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.805 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4253MB free_disk=59.88930892944336GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.877 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 30491b9b-e328-43ff-9a35-3f5afa6fed34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a8f34404-8153-46bd-aea0-02d909cd66a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7779552a-aa17-4b2f-8b15-69121d6b6a63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:23:11 np0005534516 nova_compute[253538]: 2025-11-25 08:23:11.963 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:11 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 03:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025853926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.426 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.431 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.446 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.546 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.547 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:12 np0005534516 nova_compute[253538]: 2025-11-25 08:23:12.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 264 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 303 op/s
Nov 25 03:23:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:13 np0005534516 nova_compute[253538]: 2025-11-25 08:23:13.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:13 np0005534516 nova_compute[253538]: 2025-11-25 08:23:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:14 np0005534516 nova_compute[253538]: 2025-11-25 08:23:14.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:14 np0005534516 nova_compute[253538]: 2025-11-25 08:23:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:23:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.142 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.143 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.144 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.144 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.145 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.147 253542 INFO nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Terminating instance#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.148 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.149 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquired lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.149 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:23:15 np0005534516 nova_compute[253538]: 2025-11-25 08:23:15.767 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.058 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.076 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Releasing lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.076 253542 DEBUG nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.083 253542 INFO nova.virt.libvirt.driver [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance destroyed successfully.#033[00m
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.084 253542 DEBUG nova.objects.instance [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'resources' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 210 op/s
Nov 25 03:23:16 np0005534516 nova_compute[253538]: 2025-11-25 08:23:16.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:16.938 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:23:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:16.940 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:23:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:17Z|00041|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.359 253542 DEBUG nova.compute.manager [None req-144950eb-7800-46c1-8308-57c1bde3b8c0 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.364 253542 INFO nova.compute.manager [None req-144950eb-7800-46c1-8308-57c1bde3b8c0 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Retrieving diagnostics#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.622 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.623 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.623 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.624 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.624 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.625 253542 INFO nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Terminating instance#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.626 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.627 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquired lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.627 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.654 253542 INFO nova.virt.libvirt.driver [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deleting instance files /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63_del#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.655 253542 INFO nova.virt.libvirt.driver [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deletion of /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63_del complete#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.730 253542 INFO nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 1.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG oslo.service.loopingcall [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.968 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.976 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:17 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.986 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:17.999 253542 INFO nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 0.27 seconds to deallocate network for instance.#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.040 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.041 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.133 253542 DEBUG oslo_concurrency.processutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.275 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.288 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Releasing lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.289 253542 DEBUG nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:23:18 np0005534516 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 03:23:18 np0005534516 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.938s CPU time.
Nov 25 03:23:18 np0005534516 systemd-machined[215790]: Machine qemu-7-instance-00000007 terminated.
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.514 253542 INFO nova.virt.libvirt.driver [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance destroyed successfully.#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.515 253542 DEBUG nova.objects.instance [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'resources' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231050133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.602 253542 DEBUG oslo_concurrency.processutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.608 253542 DEBUG nova.compute.provider_tree [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.628 253542 DEBUG nova.scheduler.client.report [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.650 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.673 253542 INFO nova.scheduler.client.report [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Deleted allocations for instance 7779552a-aa17-4b2f-8b15-69121d6b6a63#033[00m
Nov 25 03:23:18 np0005534516 nova_compute[253538]: 2025-11-25 08:23:18.733 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 278 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 271 op/s
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.195 253542 INFO nova.virt.libvirt.driver [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deleting instance files /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9_del#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.196 253542 INFO nova.virt.libvirt.driver [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deletion of /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9_del complete#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.282 253542 INFO nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.282 253542 DEBUG oslo.service.loopingcall [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.283 253542 DEBUG nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.283 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.769 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.786 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.801 253542 INFO nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 0.52 seconds to deallocate network for instance.#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.845 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.846 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.889 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.889 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.891 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.893 253542 INFO nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Terminating instance#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.895 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.895 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquired lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.896 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:23:19 np0005534516 nova_compute[253538]: 2025-11-25 08:23:19.941 253542 DEBUG oslo_concurrency.processutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.083 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583977569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.421 253542 DEBUG oslo_concurrency.processutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.426 253542 DEBUG nova.compute.provider_tree [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.447 253542 DEBUG nova.scheduler.client.report [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.467 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.493 253542 INFO nova.scheduler.client.report [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Deleted allocations for instance a8f34404-8153-46bd-aea0-02d909cd66a9#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.499 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.512 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Releasing lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.514 253542 DEBUG nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.571 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:20 np0005534516 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 03:23:20 np0005534516 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.716s CPU time.
Nov 25 03:23:20 np0005534516 systemd-machined[215790]: Machine qemu-6-instance-00000006 terminated.
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.747 253542 INFO nova.virt.libvirt.driver [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance destroyed successfully.#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.748 253542 DEBUG nova.objects.instance [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'resources' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.754 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.756 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.758 253542 INFO nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Terminating instance#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.759 253542 DEBUG nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:23:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 242 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.9 MiB/s wr, 256 op/s
Nov 25 03:23:20 np0005534516 kernel: tap029a2ee5-40 (unregistering): left promiscuous mode
Nov 25 03:23:20 np0005534516 NetworkManager[48915]: <info>  [1764059000.9641] device (tap029a2ee5-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:20Z|00042|binding|INFO|Releasing lport 029a2ee5-4018-4b73-8953-5436c5af3666 from this chassis (sb_readonly=0)
Nov 25 03:23:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:20Z|00043|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 down in Southbound
Nov 25 03:23:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:23:20Z|00044|binding|INFO|Removing iface tap029a2ee5-40 ovn-installed in OVS
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b2:39 10.100.0.13'], port_security=['fa:16:3e:99:b2:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=029a2ee5-4018-4b73-8953-5436c5af3666) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:23:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.981 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 029a2ee5-4018-4b73-8953-5436c5af3666 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 unbound from our chassis#033[00m
Nov 25 03:23:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef52fe4f-78d3-45fa-ab69-177fdfabe604, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:23:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3a91bd-6c58-436f-9bcc-ef36e9f81c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.984 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace which is not needed anymore#033[00m
Nov 25 03:23:20 np0005534516 nova_compute[253538]: 2025-11-25 08:23:20.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 03:23:21 np0005534516 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 15.198s CPU time.
Nov 25 03:23:21 np0005534516 systemd-machined[215790]: Machine qemu-5-instance-00000005 terminated.
Nov 25 03:23:21 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : haproxy version is 2.8.14-c23fe91
Nov 25 03:23:21 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : path to executable is /usr/sbin/haproxy
Nov 25 03:23:21 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [WARNING]  (271197) : Exiting Master process...
Nov 25 03:23:21 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [ALERT]    (271197) : Current worker (271199) exited with code 143 (Terminated)
Nov 25 03:23:21 np0005534516 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [WARNING]  (271197) : All workers exited. Exiting... (0)
Nov 25 03:23:21 np0005534516 systemd[1]: libpod-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope: Deactivated successfully.
Nov 25 03:23:21 np0005534516 podman[273431]: 2025-11-25 08:23:21.130788725 +0000 UTC m=+0.050959798 container died f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:23:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000-userdata-shm.mount: Deactivated successfully.
Nov 25 03:23:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6262865f4caeb3aa76bd5939a209fa5913aca3e8ecc37800451d727ee1322951-merged.mount: Deactivated successfully.
Nov 25 03:23:21 np0005534516 podman[273431]: 2025-11-25 08:23:21.191888766 +0000 UTC m=+0.112059799 container cleanup f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:23:21 np0005534516 NetworkManager[48915]: <info>  [1764059001.1953] manager: (tap029a2ee5-40): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 systemd[1]: libpod-conmon-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope: Deactivated successfully.
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.207 253542 INFO nova.virt.libvirt.driver [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance destroyed successfully.#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.207 253542 DEBUG nova.objects.instance [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'resources' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.227 253542 DEBUG nova.virt.libvirt.vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1503393729',id=5,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:22:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-1iln2b4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:22:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=4bc97ee2-5aba-4bb5-86e2-f0806a200c04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.227 253542 DEBUG nova.network.os_vif_util [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.228 253542 DEBUG nova.network.os_vif_util [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.228 253542 DEBUG os_vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.231 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap029a2ee5-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.236 253542 INFO os_vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40')#033[00m
Nov 25 03:23:21 np0005534516 podman[273467]: 2025-11-25 08:23:21.723572523 +0000 UTC m=+0.507387247 container remove f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.733 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.735 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.735 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e156510-1a0a-4c73-944f-3a95d40d1b49]: (4, ('Tue Nov 25 08:23:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000)\nf531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000\nTue Nov 25 08:23:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000)\nf531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.737 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.738 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c276331-eaeb-433a-8e92-faf007b766d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.739 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 kernel: tapef52fe4f-70: left promiscuous mode
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.748 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9282c781-a7a1-4dd8-9785-856ca996cd36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 nova_compute[253538]: 2025-11-25 08:23:21.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.765 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c85072-df14-4441-8eb8-242916270d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.766 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d419659-3196-416d-8745-6dc72e3f3733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b73fba3-8643-4702-9dd6-36a625a005d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431851, 'reachable_time': 25214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273502, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:21 np0005534516 systemd[1]: run-netns-ovnmeta\x2def52fe4f\x2d78d3\x2d45fa\x2dab69\x2d177fdfabe604.mount: Deactivated successfully.
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.785 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:23:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.785 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a66331f7-928f-485c-934d-3a3cdd9c4f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.024 253542 INFO nova.virt.libvirt.driver [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deleting instance files /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34_del#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.025 253542 INFO nova.virt.libvirt.driver [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deletion of /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34_del complete#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.084 253542 INFO nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG oslo.service.loopingcall [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.243 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.255 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.267 253542 INFO nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.317 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.318 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.380 253542 DEBUG oslo_concurrency.processutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420791341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.847 253542 DEBUG oslo_concurrency.processutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.855 253542 DEBUG nova.compute.provider_tree [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.870 253542 DEBUG nova.scheduler.client.report [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.919 253542 INFO nova.scheduler.client.report [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Deleted allocations for instance 30491b9b-e328-43ff-9a35-3f5afa6fed34#033[00m
Nov 25 03:23:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 182 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 235 op/s
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.941 253542 INFO nova.virt.libvirt.driver [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deleting instance files /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_del#033[00m
Nov 25 03:23:22 np0005534516 nova_compute[253538]: 2025-11-25 08:23:22.942 253542 INFO nova.virt.libvirt.driver [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deletion of /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_del complete#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.004 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.010 253542 INFO nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 2.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.010 253542 DEBUG oslo.service.loopingcall [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.011 253542 DEBUG nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.011 253542 DEBUG nova.network.neutron [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:23:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.940 253542 DEBUG nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.942 253542 DEBUG nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:23:23 np0005534516 nova_compute[253538]: 2025-11-25 08:23:23.942 253542 WARNING nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received unexpected event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.114 253542 DEBUG nova.network.neutron [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.131 253542 INFO nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 1.12 seconds to deallocate network for instance.#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.176 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.177 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.245 253542 DEBUG oslo_concurrency.processutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946080216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.708 253542 DEBUG oslo_concurrency.processutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.712 253542 DEBUG nova.compute.provider_tree [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.725 253542 DEBUG nova.scheduler.client.report [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.742 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.770 253542 INFO nova.scheduler.client.report [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Deleted allocations for instance 4bc97ee2-5aba-4bb5-86e2-f0806a200c04#033[00m
Nov 25 03:23:24 np0005534516 podman[273554]: 2025-11-25 08:23:24.822738687 +0000 UTC m=+0.072245933 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 03:23:24 np0005534516 nova_compute[253538]: 2025-11-25 08:23:24.825 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 238 op/s
Nov 25 03:23:26 np0005534516 nova_compute[253538]: 2025-11-25 08:23:26.088 253542 DEBUG nova.compute.manager [req-cb1f7d5f-0adc-4bff-8be3-37bfc2eb868f req-86c3df25-d20e-487c-848d-bf8f1ba84dc2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-deleted-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:23:26 np0005534516 nova_compute[253538]: 2025-11-25 08:23:26.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:26 np0005534516 nova_compute[253538]: 2025-11-25 08:23:26.507 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058991.506591, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:26 np0005534516 nova_compute[253538]: 2025-11-25 08:23:26.508 253542 INFO nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:23:26 np0005534516 nova_compute[253538]: 2025-11-25 08:23:26.524 253542 DEBUG nova.compute.manager [None req-3b16fc33-da19-4034-8938-a70e5620a85c - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Nov 25 03:23:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:26.943 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:23:27 np0005534516 podman[273573]: 2025-11-25 08:23:27.827358334 +0000 UTC m=+0.074534188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:23:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:23:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:23:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:23:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:23:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Nov 25 03:23:29 np0005534516 nova_compute[253538]: 2025-11-25 08:23:29.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 1.0 MiB/s wr, 133 op/s
Nov 25 03:23:31 np0005534516 nova_compute[253538]: 2025-11-25 08:23:31.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:31 np0005534516 nova_compute[253538]: 2025-11-25 08:23:31.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:31 np0005534516 nova_compute[253538]: 2025-11-25 08:23:31.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 72 KiB/s wr, 94 op/s
Nov 25 03:23:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:33 np0005534516 nova_compute[253538]: 2025-11-25 08:23:33.513 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058998.5120335, a8f34404-8153-46bd-aea0-02d909cd66a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:33 np0005534516 nova_compute[253538]: 2025-11-25 08:23:33.514 253542 INFO nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:23:33 np0005534516 nova_compute[253538]: 2025-11-25 08:23:33.532 253542 DEBUG nova.compute.manager [None req-fec8b18e-f915-49d3-a3d5-1ab9417f5b6f - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:33 np0005534516 podman[273595]: 2025-11-25 08:23:33.844822348 +0000 UTC m=+0.094342884 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:23:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 25 03:23:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 03:23:34 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 25 03:23:34 np0005534516 nova_compute[253538]: 2025-11-25 08:23:34.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.744 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059000.7430954, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.745 253542 INFO nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.874 253542 DEBUG nova.compute.manager [None req-cbdf3cb4-00d2-437f-a627-b40e3f800aa3 - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.924 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.925 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:35 np0005534516 nova_compute[253538]: 2025-11-25 08:23:35.990 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.103 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.104 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.113 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.113 253542 INFO nova.compute.claims [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:23:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 25 03:23:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 03:23:36 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.206 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059001.2050695, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.206 253542 INFO nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.222 253542 DEBUG nova.compute.manager [None req-a7efacb9-7e8f-4976-8ac8-0f1c40b0ab70 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:36 np0005534516 nova_compute[253538]: 2025-11-25 08:23:36.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 12 op/s
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.082 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:23:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089520596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.535 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.541 253542 DEBUG nova.compute.provider_tree [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.554 253542 DEBUG nova.scheduler.client.report [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.588 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.589 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.649 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.685 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.724 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.862 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.864 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.865 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.890 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.912 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.929 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.931 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.997 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.998 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.998 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:37 np0005534516 nova_compute[253538]: 2025-11-25 08:23:37.999 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.017 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.021 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.479 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.555 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.686 253542 DEBUG nova.objects.instance [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.700 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.701 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.701 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.702 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.702 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.704 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.710 253542 WARNING nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.715 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.716 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.719 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.720 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.721 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.721 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.724 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:23:38 np0005534516 nova_compute[253538]: 2025-11-25 08:23:38.728 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Nov 25 03:23:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480854766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.201 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.228 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.232 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:23:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162049715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.651 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.653 253542 DEBUG nova.objects.instance [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.674 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <name>instance-00000009</name>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:23:38</nova:creationTime>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:23:39 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:23:39 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:23:39 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:23:39 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.719 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.720 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.720 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.739 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.979 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config#033[00m
Nov 25 03:23:39 np0005534516 nova_compute[253538]: 2025-11-25 08:23:39.983 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkdikmrcs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.107 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkdikmrcs" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.137 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.141 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.316 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.317 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.#033[00m
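The config-drive sequence above (build the ISO locally with mkisofs, `rbd import` it into the `vms` pool, then delete the local copy) can be sketched as follows. The helper name is hypothetical; only the argv mirrors the command recorded in the log.

```python
# Sketch of the config-drive import step seen in the log: Nova builds the
# ISO locally, imports it into the "vms" RBD pool, then removes the local
# file. build_rbd_import_argv is a hypothetical helper; the argv it returns
# mirrors the logged `rbd import` command.

def build_rbd_import_argv(pool, local_path, image_name,
                          ceph_id="openstack",
                          conf="/etc/ceph/ceph.conf"):
    """Return the `rbd import` command line as logged by oslo.processutils."""
    return [
        "rbd", "import",
        "--pool", pool,
        local_path, image_name,
        "--image-format=2",
        "--id", ceph_id,
        "--conf", conf,
    ]

argv = build_rbd_import_argv(
    "vms",
    "/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config",
    "4659329f-611a-4436-aa9e-26937db9cd61_disk.config",
)
```

Once the import returns 0, the local `disk.config` is redundant, which is why the driver deletes it immediately afterwards.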
Nov 25 03:23:40 np0005534516 systemd-machined[215790]: New machine qemu-9-instance-00000009.
Nov 25 03:23:40 np0005534516 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.760 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059020.7600143, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.761 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.764 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.764 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.768 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.768 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.784 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.789 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.793 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.794 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.794 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
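The six "Found default for …" lines above show Nova recording hypervisor-chosen defaults for image properties the image itself left unset. A minimal sketch of that merge — the defaults dict is copied from the logged values, but the function name and shape are illustrative, not Nova's actual code:

```python
# Sketch of what _register_undefined_instance_details logs above: for each
# property the image did not define, record the libvirt driver's default.
# The values below are exactly those reported in the log; the function
# itself is a simplified stand-in.
LIBVIRT_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined_defaults(image_props):
    """Return defaults only for properties the image left unset."""
    return {k: v for k, v in LIBVIRT_DEFAULTS.items() if k not in image_props}

# An image that already pins its disk bus keeps its own value:
found = register_undefined_defaults({"hw_disk_bus": "scsi"})
```

Persisting these defaults on the instance keeps the guest ABI stable even if the operator later changes the global defaults.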
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.817 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.818 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059020.7640014, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.818 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.842 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.845 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.859 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.866 253542 INFO nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 3.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.866 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.934 253542 INFO nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 4.86 seconds to build instance.#033[00m
Nov 25 03:23:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 47 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 208 KiB/s wr, 68 op/s
Nov 25 03:23:40 np0005534516 nova_compute[253538]: 2025-11-25 08:23:40.962 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:23:41 np0005534516 nova_compute[253538]: 2025-11-25 08:23:41.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 80 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 129 op/s
Nov 25 03:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 25 03:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 03:23:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.094 253542 INFO nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Rebuilding instance#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.313 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.328 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.371 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_requests' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.382 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.438 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.446 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.453 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:23:44 np0005534516 nova_compute[253538]: 2025-11-25 08:23:44.457 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:23:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Nov 25 03:23:46 np0005534516 nova_compute[253538]: 2025-11-25 08:23:46.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Nov 25 03:23:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 03:23:49 np0005534516 nova_compute[253538]: 2025-11-25 08:23:49.437 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 03:23:51 np0005534516 nova_compute[253538]: 2025-11-25 08:23:51.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 91 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 686 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:23:53
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:23:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:23:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:23:54 np0005534516 nova_compute[253538]: 2025-11-25 08:23:54.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:54 np0005534516 nova_compute[253538]: 2025-11-25 08:23:54.504 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:23:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 105 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 25 03:23:55 np0005534516 podman[273987]: 2025-11-25 08:23:55.821983628 +0000 UTC m=+0.062867770 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:23:56 np0005534516 nova_compute[253538]: 2025-11-25 08:23:56.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 112 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 03:23:58 np0005534516 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 03:23:58 np0005534516 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.885s CPU time.
Nov 25 03:23:58 np0005534516 systemd-machined[215790]: Machine qemu-9-instance-00000009 terminated.
Nov 25 03:23:58 np0005534516 podman[274006]: 2025-11-25 08:23:58.187046035 +0000 UTC m=+0.082637888 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:23:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:23:58 np0005534516 nova_compute[253538]: 2025-11-25 08:23:58.526 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance shutdown successfully after 14 seconds.#033[00m
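Between 08:23:44 and 08:23:58 the driver polls the guest, resends the shutdown request at the 10-second mark ("Instance in state 1 after 10 seconds - resending shutdown"), and reports success after 14 seconds. That retry loop can be sketched like this — FakeGuest and the one-poll-per-second cadence are illustrative, not Nova's actual code:

```python
# Sketch of _clean_shutdown's retry behaviour seen in the log: poll the
# guest, resend the ACPI shutdown every retry_interval seconds, give up
# after timeout. FakeGuest stands in for the libvirt guest object.
from typing import Optional

class FakeGuest:
    def __init__(self, seconds_until_off):
        self._left = seconds_until_off

    def shutdown(self):
        pass  # a real guest would receive an ACPI shutdown request here

    def is_running(self):
        self._left -= 1
        return self._left > 0

def clean_shutdown(guest, timeout, retry_interval=10) -> Optional[int]:
    """Return the elapsed seconds at power-off, or None on timeout."""
    guest.shutdown()
    for elapsed in range(1, timeout + 1):   # one poll per "second"
        if not guest.is_running():
            return elapsed
        if elapsed % retry_interval == 0:   # "resending shutdown" in the log
            guest.shutdown()
    return None

elapsed = clean_shutdown(FakeGuest(seconds_until_off=14), timeout=60)
```

With a guest that powers off after 14 polls, the loop resends once (at 10) and returns 14, matching the "shutdown successfully after 14 seconds" message above.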
Nov 25 03:23:58 np0005534516 nova_compute[253538]: 2025-11-25 08:23:58.533 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.#033[00m
Nov 25 03:23:58 np0005534516 nova_compute[253538]: 2025-11-25 08:23:58.539 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.#033[00m
Nov 25 03:23:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 118 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.002 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.003 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.173 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.174 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.199 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.231 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.251 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.254 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.255 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:23:59 np0005534516 nova_compute[253538]: 2025-11-25 08:23:59.790 253542 DEBUG nova.virt.libvirt.imagebackend [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/64385127-d622-49bb-be38-b33beb2692d1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/64385127-d622-49bb-be38-b33beb2692d1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
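The clone decision above hinges on the `rbd://` image location, which packs the cluster fsid, pool, image id, and snapshot into one URL. A parsing sketch — the helper is hypothetical, though Nova's imagebackend performs an equivalent split before attempting a copy-on-write clone:

```python
# Sketch: split an rbd:// image location of the form
#   rbd://<fsid>/<pool>/<image-id>/<snapshot>
# into its parts. Hypothetical helper; Nova's imagebackend does an
# equivalent parse when deciding whether an image can be cloned in-cluster.
def parse_rbd_url(url):
    prefix = "rbd://"
    if not url.startswith(prefix):
        raise ValueError("not an rbd location: %s" % url)
    fsid, pool, image, snap = url[len(prefix):].split("/")
    return {"fsid": fsid, "pool": pool, "image": image, "snap": snap}

loc = parse_rbd_url(
    "rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/"
    "64385127-d622-49bb-be38-b33beb2692d1/snap"
)
```

If the fsid matches the local cluster, the disk can be cloned from the `images` pool snapshot instead of being downloaded through Glance.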
Nov 25 03:24:00 np0005534516 podman[274275]: 2025-11-25 08:24:00.307219203 +0000 UTC m=+0.079866341 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:24:00 np0005534516 podman[274275]: 2025-11-25 08:24:00.438809446 +0000 UTC m=+0.211456574 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:24:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 107 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.135 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.199 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.200 253542 DEBUG nova.virt.images [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] 64385127-d622-49bb-be38-b33beb2692d1 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.202 253542 DEBUG nova.privsep.utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.202 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.399 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.402 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.461 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.463 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.484 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.491 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7f5f9d89-8567-44e4-8a5f-7436811d9517 does not exist
Nov 25 03:24:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f0e4c608-ba61-484f-ade1-cb1083c39236 does not exist
Nov 25 03:24:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6cf16551-34e5-4e33-9371-bbbd31a8f497 does not exist
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:24:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:24:01 np0005534516 nova_compute[253538]: 2025-11-25 08:24:01.971 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.036 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.122 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.123 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.123 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.124 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.125 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.126 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.130 253542 WARNING nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.136 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.137 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.144 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.144 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.145 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.145 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.162 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.511397157 +0000 UTC m=+0.044835642 container create 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:24:02 np0005534516 systemd[1]: Started libpod-conmon-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope.
Nov 25 03:24:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.488575095 +0000 UTC m=+0.022013500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.62785179 +0000 UTC m=+0.161290205 container init 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.636804128 +0000 UTC m=+0.170242493 container start 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:24:02 np0005534516 hopeful_fermi[274863]: 167 167
Nov 25 03:24:02 np0005534516 systemd[1]: libpod-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope: Deactivated successfully.
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.645872159 +0000 UTC m=+0.179310544 container attach 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:24:02 np0005534516 podman[274847]: 2025-11-25 08:24:02.64628591 +0000 UTC m=+0.179724285 container died 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330364686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.684 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.709 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:02 np0005534516 nova_compute[253538]: 2025-11-25 08:24:02.714 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 74 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 977 KiB/s rd, 2.5 MiB/s wr, 88 op/s
Nov 25 03:24:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-53a3f45fa07fb121a8728bcbcf4059822806b2cc59ad0822c29695b0a31a0ba0-merged.mount: Deactivated successfully.
Nov 25 03:24:03 np0005534516 podman[274847]: 2025-11-25 08:24:03.011424507 +0000 UTC m=+0.544862872 container remove 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:24:03 np0005534516 systemd[1]: libpod-conmon-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope: Deactivated successfully.
Nov 25 03:24:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110738618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.180 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.189 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <name>instance-00000009</name>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:02</nova:creationTime>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:03 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:03 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:03 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:03 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
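The domain definition dumped by `_get_guest_xml` above is plain libvirt XML. A sketch that extracts each RBD-backed disk's pool, image, and guest target from such a dump, using only the standard library; `DOMAIN_XML` is a trimmed copy of the XML logged above and `rbd_disks` is a hypothetical helper:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the guest XML logged by _get_guest_xml above;
# only the parts needed to locate the RBD-backed disks are kept.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(xml_text):
    """Return (target_dev, pool, image) for every rbd-backed disk."""
    root = ET.fromstring(xml_text)
    out = []
    for disk in root.iter("disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is None or src.get("protocol") != "rbd":
            continue
        # libvirt encodes the RBD location as "pool/image".
        pool, _, image = src.get("name").partition("/")
        out.append((tgt.get("dev"), pool, image))
    return out

for dev, pool, image in rbd_disks(DOMAIN_XML):
    print(dev, pool, image)
```

Against this dump it recovers the boot disk on `vda` and the config-drive CD-ROM on `sda`, both served from the `vms` pool on the monitor at 192.168.122.100:6789.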
Nov 25 03:24:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.252 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.252 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.253 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive
Nov 25 03:24:03 np0005534516 podman[274930]: 2025-11-25 08:24:03.277608995 +0000 UTC m=+0.074777260 container create fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.281 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.303 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:03 np0005534516 podman[274930]: 2025-11-25 08:24:03.228321812 +0000 UTC m=+0.025490147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:03 np0005534516 systemd[1]: Started libpod-conmon-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope.
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.355 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'keypairs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:03 np0005534516 podman[274930]: 2025-11-25 08:24:03.378597981 +0000 UTC m=+0.175766326 container init fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:24:03 np0005534516 podman[274930]: 2025-11-25 08:24:03.38719369 +0000 UTC m=+0.184361985 container start fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:24:03 np0005534516 podman[274930]: 2025-11-25 08:24:03.391010935 +0000 UTC m=+0.188179220 container attach fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.526 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.533 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjdh9v4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.658 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjdh9v4d" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.683 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.687 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00027935397945562287 of space, bias 1.0, pg target 0.08380619383668686 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:24:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
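Each `pg_autoscaler` line above reports a pool's raw PG target and the value it was quantized to, alongside the current pg_num. A sketch that flags pools where those two disagree (here only `cephfs.cephfs.meta`, 32 → 16); the regex is shaped to this log sample only, and the real heuristics, including pg_num_min floors and no-decrease thresholds, live in the ceph-mgr pg_autoscaler module:

```python
import re

# Match the "Pool '<name>' ... pg target T quantized to Q (current C)"
# shape seen in the ceph-mgr lines above; an assumption, not a stable format.
AUTOSCALER_RE = re.compile(
    r"Pool '(?P<pool>[^']+)' .* pg target (?P<target>[\d.e-]+) "
    r"quantized to (?P<q>\d+) \(current (?P<cur>\d+)\)"
)

def pending_changes(lines):
    """Return {pool: (quantized, current)} for pools where the two differ."""
    out = {}
    for line in lines:
        m = AUTOSCALER_RE.search(line)
        if m:
            q, cur = int(m["q"]), int(m["cur"])
            if q != cur:
                out[m["pool"]] = (q, cur)
    return out

sample = [
    "Pool '.mgr' root_id -1 using 7.18e-06 of space, bias 1.0, "
    "pg target 0.00215 quantized to 1 (current 1)",
    "Pool 'cephfs.cephfs.meta' root_id -1 using 5.08e-07 of space, bias 4.0, "
    "pg target 0.00061 quantized to 16 (current 32)",
]
print(pending_changes(sample))
```

Run over this journal's autoscaler burst, the only pool with a pending change is `cephfs.cephfs.meta`; every other quantized target already matches its current pg_num.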
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.830 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:03 np0005534516 nova_compute[253538]: 2025-11-25 08:24:03.831 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.
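The oslo_concurrency.processutils result lines above follow a fixed `CMD "..." returned: N in Ts` pattern, which makes per-command timings easy to pull out of a journal. A sketch with the pattern inferred from these samples (paths in the second sample shortened; `command_timings` is a hypothetical helper):

```python
import re

# Match the processutils completion lines seen above; the exact layout
# is an assumption taken from this journal, not a documented contract.
CMD_RE = re.compile(
    r'CMD "(?P<cmd>[^"]+)" returned: (?P<rc>\d+) in (?P<secs>[\d.]+)s'
)

def command_timings(lines):
    """Extract (program, return_code, seconds) from processutils log lines."""
    out = []
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            out.append((m["cmd"].split()[0], int(m["rc"]), float(m["secs"])))
    return out

sample = [
    'CMD "ceph mon dump --format=json --id openstack '
    '--conf /etc/ceph/ceph.conf" returned: 0 in 0.465s',
    'CMD "rbd import --pool vms /var/lib/nova/instances/x/disk.config '
    'x_disk.config --image-format=2" returned: 0 in 0.143s',
]
for cmd, rc, secs in command_timings(sample):
    print(f"{cmd}: rc={rc} {secs:.3f}s")
```

Applied to this spawn, it shows `ceph mon dump` at 0.465s dominating the `rbd import` of the config drive at 0.143s, a quick way to spot slow subprocess calls during instance builds.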
Nov 25 03:24:03 np0005534516 systemd-machined[215790]: New machine qemu-10-instance-00000009.
Nov 25 03:24:03 np0005534516 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Nov 25 03:24:03 np0005534516 podman[275015]: 2025-11-25 08:24:03.984259037 +0000 UTC m=+0.083676438 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.253 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4659329f-611a-4436-aa9e-26937db9cd61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.253 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059044.2528143, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.254 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.256 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.256 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.259 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.259 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.295 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.302 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.334 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.335 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059044.2561886, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.335 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.353 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.356 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.388 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.389 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:24:04 np0005534516 lucid_heyrovsky[274964]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:24:04 np0005534516 lucid_heyrovsky[274964]: --> relative data size: 1.0
Nov 25 03:24:04 np0005534516 lucid_heyrovsky[274964]: --> All data devices are unavailable
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.436 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.436 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.437 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 03:24:04 np0005534516 systemd[1]: libpod-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope: Deactivated successfully.
Nov 25 03:24:04 np0005534516 podman[274930]: 2025-11-25 08:24:04.462374691 +0000 UTC m=+1.259542986 container died fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc-merged.mount: Deactivated successfully.
Nov 25 03:24:04 np0005534516 nova_compute[253538]: 2025-11-25 08:24:04.504 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:04 np0005534516 podman[274930]: 2025-11-25 08:24:04.530483417 +0000 UTC m=+1.327651682 container remove fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:24:04 np0005534516 systemd[1]: libpod-conmon-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope: Deactivated successfully.
Nov 25 03:24:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.158 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.158 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.173 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.247 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.248 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.253272624 +0000 UTC m=+0.076149880 container create 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.256 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.256 253542 INFO nova.compute.claims [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.205966464 +0000 UTC m=+0.028843820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.374 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:05 np0005534516 systemd[1]: Started libpod-conmon-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope.
Nov 25 03:24:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.618082072 +0000 UTC m=+0.440959348 container init 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.625843827 +0000 UTC m=+0.448721083 container start 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:24:05 np0005534516 vigorous_allen[275286]: 167 167
Nov 25 03:24:05 np0005534516 systemd[1]: libpod-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope: Deactivated successfully.
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.683471492 +0000 UTC m=+0.506348748 container attach 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.683937364 +0000 UTC m=+0.506814610 container died 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:24:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dfc426047ba604c00f1fe66e2e0e9b00d7398d9aa1557a5eac07a8dcfc7a59af-merged.mount: Deactivated successfully.
Nov 25 03:24:05 np0005534516 podman[275268]: 2025-11-25 08:24:05.739496862 +0000 UTC m=+0.562374118 container remove 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:24:05 np0005534516 systemd[1]: libpod-conmon-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope: Deactivated successfully.
Nov 25 03:24:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892928765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.852 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.864 253542 DEBUG nova.compute.provider_tree [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.882 253542 DEBUG nova.scheduler.client.report [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.916 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:05 np0005534516 nova_compute[253538]: 2025-11-25 08:24:05.917 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:24:05 np0005534516 podman[275332]: 2025-11-25 08:24:05.937660558 +0000 UTC m=+0.047063674 container create 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:24:05 np0005534516 systemd[1]: Started libpod-conmon-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope.
Nov 25 03:24:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.010 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.010 253542 DEBUG nova.network.neutron [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:05.917638094 +0000 UTC m=+0.027041230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:06.016376757 +0000 UTC m=+0.125779893 container init 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:06.023784112 +0000 UTC m=+0.133187228 container start 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:06.026903148 +0000 UTC m=+0.136306284 container attach 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.035 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.048 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.156 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.158 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.158 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating image(s)
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.187 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.209 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.231 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.235 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.304 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.328 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.332 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.580 253542 DEBUG nova.network.neutron [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.582 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.606 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.607 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.753 253542 INFO nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Rebuilding instance#033[00m
Nov 25 03:24:06 np0005534516 practical_khorana[275346]: {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    "0": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "devices": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "/dev/loop3"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            ],
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_name": "ceph_lv0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_size": "21470642176",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "name": "ceph_lv0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "tags": {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_name": "ceph",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.crush_device_class": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.encrypted": "0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_id": "0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.vdo": "0"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            },
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "vg_name": "ceph_vg0"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        }
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    ],
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    "1": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "devices": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "/dev/loop4"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            ],
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_name": "ceph_lv1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_size": "21470642176",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "name": "ceph_lv1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "tags": {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_name": "ceph",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.crush_device_class": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.encrypted": "0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_id": "1",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.vdo": "0"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            },
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "vg_name": "ceph_vg1"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        }
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    ],
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    "2": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "devices": [
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "/dev/loop5"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            ],
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_name": "ceph_lv2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_size": "21470642176",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "name": "ceph_lv2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "tags": {
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.cluster_name": "ceph",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.crush_device_class": "",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.encrypted": "0",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osd_id": "2",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:                "ceph.vdo": "0"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            },
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "type": "block",
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:            "vg_name": "ceph_vg2"
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:        }
Nov 25 03:24:06 np0005534516 practical_khorana[275346]:    ]
Nov 25 03:24:06 np0005534516 practical_khorana[275346]: }
Nov 25 03:24:06 np0005534516 systemd[1]: libpod-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope: Deactivated successfully.
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:06.800599465 +0000 UTC m=+0.910002601 container died 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 03:24:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e-merged.mount: Deactivated successfully.
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.882 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:06 np0005534516 podman[275332]: 2025-11-25 08:24:06.914996311 +0000 UTC m=+1.024399437 container remove 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:24:06 np0005534516 systemd[1]: libpod-conmon-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope: Deactivated successfully.
Nov 25 03:24:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Nov 25 03:24:06 np0005534516 nova_compute[253538]: 2025-11-25 08:24:06.981 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] resizing rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.022 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.041 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.096 253542 DEBUG nova.objects.instance [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'migration_context' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.104 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Ensure instance console log exists: /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.113 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.113 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.114 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.115 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.122 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.127 253542 WARNING nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.129 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.132 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.132 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
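The `get_cpu_topology_constraints` / `_get_possible_cpu_topologies` lines above show Nova enumerating sockets:cores:threads factorizations of the vCPU count under the default 65536 limits, then sorting to pick one. A minimal sketch of that enumeration (not Nova's actual code, just the idea behind "Build topologies for 1 vcpu(s)" and "Got 1 possible topologies"):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals vcpus,
    within the given per-dimension limits (65536 is the default seen in the log)."""
    found = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        remaining = vcpus // sockets
        for cores in range(1, min(remaining, max_cores) + 1):
            if remaining % cores:
                continue
            threads = remaining // cores
            if threads <= max_threads:
                found.append((sockets, cores, threads))
    return found

# For the 1-vCPU m1.nano flavor in this log, the only candidate is 1:1:1,
# matching "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
print(possible_topologies(1))
```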
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.141 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.164 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.170 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:24:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484734067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.576 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
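The `ceph mon dump --format=json` round-trips above are Nova's RBD driver discovering monitor addresses before touching images in the `vms` pool. A hedged sketch of parsing that JSON (field names assumed from the reef-era format, with a top-level `"mons"` list and `addr` values like `192.168.122.100:6789/0`):

```python
import json

def mon_hosts(mon_dump_json):
    """Map monitor name -> host:port from `ceph mon dump --format=json` output.

    The "/0" suffix on addr is a connection nonce and is stripped; the exact
    schema here is an assumption based on typical reef output, not this log.
    """
    doc = json.loads(mon_dump_json)
    hosts = {}
    for mon in doc.get("mons", []):
        addr = mon.get("addr", "").split("/", 1)[0]
        hosts[mon["name"]] = addr
    return hosts

# Hypothetical payload shaped like what the mon at 192.168.122.100 would return:
sample = '{"epoch": 1, "mons": [{"name": "compute-0", "addr": "192.168.122.100:6789/0"}]}'
print(mon_hosts(sample))
```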
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.615 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:07 np0005534516 nova_compute[253538]: 2025-11-25 08:24:07.623 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.645649966 +0000 UTC m=+0.037115269 container create 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:24:07 np0005534516 systemd[1]: Started libpod-conmon-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope.
Nov 25 03:24:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.63132931 +0000 UTC m=+0.022794643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.787553294 +0000 UTC m=+0.179018617 container init 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.796151922 +0000 UTC m=+0.187617255 container start 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:24:07 np0005534516 beautiful_hermann[275729]: 167 167
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.801050677 +0000 UTC m=+0.192516030 container attach 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:24:07 np0005534516 systemd[1]: libpod-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope: Deactivated successfully.
Nov 25 03:24:07 np0005534516 conmon[275729]: conmon 80c9ef87afeae2e9551b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope/container/memory.events
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.803714591 +0000 UTC m=+0.195179914 container died 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:24:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-80b1c9d29f67d0e5cb01335439a05baa3e0086a9d78405c403713a190709ef96-merged.mount: Deactivated successfully.
Nov 25 03:24:07 np0005534516 podman[275710]: 2025-11-25 08:24:07.876687231 +0000 UTC m=+0.268152554 container remove 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:07 np0005534516 systemd[1]: libpod-conmon-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope: Deactivated successfully.
Nov 25 03:24:08 np0005534516 podman[275772]: 2025-11-25 08:24:08.047744036 +0000 UTC m=+0.050614532 container create 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:24:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694983390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.080 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.084 253542 DEBUG nova.objects.instance [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.094 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <uuid>7ca3cfae-7765-48b7-9d65-660c3b709a55</uuid>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <name>instance-0000000a</name>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiagnosticsTest-server-1777165044</nova:name>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:07</nova:creationTime>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:user uuid="247c30d08ff74c88808c16ddee332fbe">tempest-ServerDiagnosticsTest-593637892-project-member</nova:user>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <nova:project uuid="5a8ba4490f464848abc986d6b52e37cc">tempest-ServerDiagnosticsTest-593637892</nova:project>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="serial">7ca3cfae-7765-48b7-9d65-660c3b709a55</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="uuid">7ca3cfae-7765-48b7-9d65-660c3b709a55</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/console.log" append="off"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:08 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:08 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:08 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:08 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
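The guest XML dumped by `_get_guest_xml` above defines two network disks backed by RBD: the root disk on `vda` (virtio) and the config-drive cdrom on `sda` (sata). One way to pull that mapping out of such a domain document with the standard library, using an abbreviated copy of the XML from the log:

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the <domain> XML logged above; only the disk
# elements relevant to the mapping are kept.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def disk_map(xml_text):
    """Map each guest disk target (vda, sda, ...) to its RBD source image."""
    root = ET.fromstring(xml_text)
    return {
        disk.find("target").get("dev"): disk.find("source").get("name")
        for disk in root.iter("disk")
    }

print(disk_map(DOMAIN_XML))
```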
Nov 25 03:24:08 np0005534516 systemd[1]: Started libpod-conmon-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope.
Nov 25 03:24:08 np0005534516 podman[275772]: 2025-11-25 08:24:08.018536618 +0000 UTC m=+0.021407094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:24:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:24:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:24:08 np0005534516 podman[275772]: 2025-11-25 08:24:08.171532842 +0000 UTC m=+0.174403348 container init 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.171 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.172 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.173 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Using config drive#033[00m
Nov 25 03:24:08 np0005534516 podman[275772]: 2025-11-25 08:24:08.181301523 +0000 UTC m=+0.184171979 container start 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:24:08 np0005534516 podman[275772]: 2025-11-25 08:24:08.19346262 +0000 UTC m=+0.196333106 container attach 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.207 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.812 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating config drive at /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.817 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwucte386 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.947 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwucte386" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 109 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 141 op/s
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.973 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:08 np0005534516 nova_compute[253538]: 2025-11-25 08:24:08.977 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:09 np0005534516 nova_compute[253538]: 2025-11-25 08:24:09.164 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:09 np0005534516 nova_compute[253538]: 2025-11-25 08:24:09.165 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deleting local config drive /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config because it was imported into RBD.#033[00m
Nov 25 03:24:09 np0005534516 condescending_bose[275791]: {
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_id": 1,
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "type": "bluestore"
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    },
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_id": 2,
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "type": "bluestore"
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    },
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_id": 0,
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:        "type": "bluestore"
Nov 25 03:24:09 np0005534516 condescending_bose[275791]:    }
Nov 25 03:24:09 np0005534516 condescending_bose[275791]: }
Nov 25 03:24:09 np0005534516 systemd[1]: libpod-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Deactivated successfully.
Nov 25 03:24:09 np0005534516 podman[275772]: 2025-11-25 08:24:09.210392519 +0000 UTC m=+1.213262965 container died 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:24:09 np0005534516 systemd[1]: libpod-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Consumed 1.020s CPU time.
Nov 25 03:24:09 np0005534516 systemd-machined[215790]: New machine qemu-11-instance-0000000a.
Nov 25 03:24:09 np0005534516 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Nov 25 03:24:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc-merged.mount: Deactivated successfully.
Nov 25 03:24:09 np0005534516 podman[275772]: 2025-11-25 08:24:09.323126359 +0000 UTC m=+1.325996815 container remove 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:24:09 np0005534516 systemd[1]: libpod-conmon-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Deactivated successfully.
Nov 25 03:24:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:24:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:24:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e162fcd0-88c4-4075-94a2-c7b3eeb4d01e does not exist
Nov 25 03:24:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ed814936-35aa-485f-8096-3f1448dda72b does not exist
Nov 25 03:24:09 np0005534516 nova_compute[253538]: 2025-11-25 08:24:09.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:09 np0005534516 nova_compute[253538]: 2025-11-25 08:24:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.028 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059050.0281622, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.029 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.033 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.033 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.037 253542 INFO nova.virt.libvirt.driver [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance spawned successfully.#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.037 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.052 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.059 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.062 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.063 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.063 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.064 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.064 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.065 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.270 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.271 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059050.0322893, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.312 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.324 253542 INFO nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 4.17 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.324 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.384 253542 INFO nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 5.16 seconds to build instance.#033[00m
Nov 25 03:24:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.400 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:24:10 np0005534516 nova_compute[253538]: 2025-11-25 08:24:10.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 134 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Nov 25 03:24:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511440864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.048 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.118 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.123 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.123 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.258 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.259 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4439MB free_disk=59.95622634887695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.259 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.260 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4659329f-611a-4436-aa9e-26937db9cd61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7ca3cfae-7765-48b7-9d65-660c3b709a55 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.345 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.345 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.397 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.544 253542 DEBUG nova.compute.manager [None req-a85b4a57-e15c-486c-83b4-33fa9d9f719b d683a34b78b44a5ca9f59cd5f49e57cf 77990df78fda43d19109b5fd47d2d5ad - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.548 253542 INFO nova.compute.manager [None req-a85b4a57-e15c-486c-83b4-33fa9d9f719b d683a34b78b44a5ca9f59cd5f49e57cf 77990df78fda43d19109b5fd47d2d5ad - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Retrieving diagnostics#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.777 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.779 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.779 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.780 253542 INFO nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Terminating instance#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.781 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.782 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquired lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.782 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890546180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.847 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.854 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.874 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.896 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:24:11 np0005534516 nova_compute[253538]: 2025-11-25 08:24:11.897 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.255 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.812 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.826 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Releasing lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.827 253542 DEBUG nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.898 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:12 np0005534516 nova_compute[253538]: 2025-11-25 08:24:12.915 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:12 np0005534516 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 03:24:12 np0005534516 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 3.502s CPU time.
Nov 25 03:24:12 np0005534516 systemd-machined[215790]: Machine qemu-11-instance-0000000a terminated.
Nov 25 03:24:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 134 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Nov 25 03:24:13 np0005534516 nova_compute[253538]: 2025-11-25 08:24:13.049 253542 INFO nova.virt.libvirt.driver [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance destroyed successfully.#033[00m
Nov 25 03:24:13 np0005534516 nova_compute[253538]: 2025-11-25 08:24:13.050 253542 DEBUG nova.objects.instance [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'resources' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.449622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053449664, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1270, "num_deletes": 507, "total_data_size": 1316105, "memory_usage": 1343024, "flush_reason": "Manual Compaction"}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053566104, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1252824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23068, "largest_seqno": 24337, "table_properties": {"data_size": 1247429, "index_size": 2279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15474, "raw_average_key_size": 19, "raw_value_size": 1234214, "raw_average_value_size": 1518, "num_data_blocks": 102, "num_entries": 813, "num_filter_entries": 813, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058977, "oldest_key_time": 1764058977, "file_creation_time": 1764059053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 116546 microseconds, and 5470 cpu microseconds.
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.566164) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1252824 bytes OK
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.566189) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590104) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590160) EVENT_LOG_v1 {"time_micros": 1764059053590145, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590193) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1309240, prev total WAL file size 1309240, number of live WAL files 2.
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.591391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1223KB)], [53(9103KB)]
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053591439, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10574673, "oldest_snapshot_seqno": -1}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4598 keys, 7217278 bytes, temperature: kUnknown
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053962684, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7217278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7186963, "index_size": 17725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 115523, "raw_average_key_size": 25, "raw_value_size": 7104183, "raw_average_value_size": 1545, "num_data_blocks": 739, "num_entries": 4598, "num_filter_entries": 4598, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.962923) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7217278 bytes
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.974779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.5 rd, 19.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(14.2) write-amplify(5.8) OK, records in: 5627, records dropped: 1029 output_compression: NoCompression
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.974824) EVENT_LOG_v1 {"time_micros": 1764059053974804, "job": 28, "event": "compaction_finished", "compaction_time_micros": 371311, "compaction_time_cpu_micros": 33617, "output_level": 6, "num_output_files": 1, "total_output_size": 7217278, "num_input_records": 5627, "num_output_records": 4598, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053975435, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053978062, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.591187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:24:14 np0005534516 nova_compute[253538]: 2025-11-25 08:24:14.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:14 np0005534516 nova_compute[253538]: 2025-11-25 08:24:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:14 np0005534516 nova_compute[253538]: 2025-11-25 08:24:14.946 253542 INFO nova.virt.libvirt.driver [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deleting instance files /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55_del#033[00m
Nov 25 03:24:14 np0005534516 nova_compute[253538]: 2025-11-25 08:24:14.947 253542 INFO nova.virt.libvirt.driver [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deletion of /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55_del complete#033[00m
Nov 25 03:24:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 109 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.027 253542 INFO nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 2.20 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.028 253542 DEBUG oslo.service.loopingcall [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.028 253542 DEBUG nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.029 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.820 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.832 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.852 253542 INFO nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.894 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.895 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:15 np0005534516 nova_compute[253538]: 2025-11-25 08:24:15.975 253542 DEBUG oslo_concurrency.processutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739399886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.451 253542 DEBUG oslo_concurrency.processutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.457 253542 DEBUG nova.compute.provider_tree [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.478 253542 DEBUG nova.scheduler.client.report [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.513 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.542 253542 INFO nova.scheduler.client.report [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Deleted allocations for instance 7ca3cfae-7765-48b7-9d65-660c3b709a55#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.615 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.693 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.708 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.776 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.776 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.784 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.785 253542 INFO nova.compute.claims [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:16 np0005534516 nova_compute[253538]: 2025-11-25 08:24:16.923 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 99 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.256 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:24:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552264641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.390 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.396 253542 DEBUG nova.compute.provider_tree [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.410 253542 DEBUG nova.scheduler.client.report [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.435 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.436 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.489 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.490 253542 DEBUG nova.network.neutron [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.520 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.539 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.649 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.650 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.651 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating image(s)
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.672 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.695 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.717 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.720 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.777 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.779 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.779 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.780 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.804 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.807 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.985 253542 DEBUG nova.network.neutron [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 03:24:17 np0005534516 nova_compute[253538]: 2025-11-25 08:24:17.986 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:24:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:24:18Z|00045|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.257 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.310 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] resizing rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:24:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.471 253542 DEBUG nova.objects.instance [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'migration_context' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.487 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.487 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Ensure instance console log exists: /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.488 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.488 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.489 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.490 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.495 253542 WARNING nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.500 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.501 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.507 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.514 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193521444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.928 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.952 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:18 np0005534516 nova_compute[253538]: 2025-11-25 08:24:18.957 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 114 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 208 op/s
Nov 25 03:24:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042104617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.553 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.556 253542 DEBUG nova.objects.instance [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.588 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <uuid>1cb8bb78-4ff6-496c-858c-41159362ffb8</uuid>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <name>instance-0000000b</name>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerExternalEventsTest-server-44658727</nova:name>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:18</nova:creationTime>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:user uuid="479070fb6fab4140a63c6eb2f769af32">tempest-ServerExternalEventsTest-287738302-project-member</nova:user>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <nova:project uuid="17c27399c5cf49759f4c3caf086910cb">tempest-ServerExternalEventsTest-287738302</nova:project>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="serial">1cb8bb78-4ff6-496c-858c-41159362ffb8</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="uuid">1cb8bb78-4ff6-496c-858c-41159362ffb8</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1cb8bb78-4ff6-496c-858c-41159362ffb8_disk">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/console.log" append="off"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:19 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:19 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:19 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:19 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.688 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.689 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.693 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Using config drive
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.732 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:19.773 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:24:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:19.775 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.968 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating config drive at /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config
Nov 25 03:24:19 np0005534516 nova_compute[253538]: 2025-11-25 08:24:19.973 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcob9wtt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:20 np0005534516 nova_compute[253538]: 2025-11-25 08:24:20.099 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcob9wtt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:20 np0005534516 nova_compute[253538]: 2025-11-25 08:24:20.123 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:20 np0005534516 nova_compute[253538]: 2025-11-25 08:24:20.127 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:20 np0005534516 nova_compute[253538]: 2025-11-25 08:24:20.564 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:20 np0005534516 nova_compute[253538]: 2025-11-25 08:24:20.565 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deleting local config drive /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config because it was imported into RBD.
Nov 25 03:24:20 np0005534516 systemd-machined[215790]: New machine qemu-12-instance-0000000b.
Nov 25 03:24:20 np0005534516 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Nov 25 03:24:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 193 op/s
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:21 np0005534516 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 03:24:21 np0005534516 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 13.060s CPU time.
Nov 25 03:24:21 np0005534516 systemd-machined[215790]: Machine qemu-10-instance-00000009 terminated.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.726 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059061.7259235, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.726 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Resumed (Lifecycle Event)
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.730 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.731 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.735 253542 INFO nova.virt.libvirt.driver [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance spawned successfully.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.736 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.749 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.755 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.758 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.759 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.759 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.760 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.760 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.761 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059061.7305493, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Started (Lifecycle Event)
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.815 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.819 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.824 253542 INFO nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 4.17 seconds to spawn the instance on the hypervisor.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.825 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.835 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.881 253542 INFO nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 5.13 seconds to build instance.
Nov 25 03:24:21 np0005534516 nova_compute[253538]: 2025-11-25 08:24:21.897 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:22 np0005534516 nova_compute[253538]: 2025-11-25 08:24:22.458 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance shutdown successfully after 15 seconds.
Nov 25 03:24:22 np0005534516 nova_compute[253538]: 2025-11-25 08:24:22.464 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 03:24:22 np0005534516 nova_compute[253538]: 2025-11-25 08:24:22.468 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 03:24:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 164 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Nov 25 03:24:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.475 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.476 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.499 253542 DEBUG nova.compute.manager [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.499 253542 DEBUG nova.compute.manager [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.500 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Acquiring lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.500 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Acquired lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.501 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.627 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.629 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.655 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.684 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.708 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.712 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.747 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.798 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.799 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.800 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.800 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.822 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.826 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.943 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.944 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.944 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.945 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.945 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.946 253542 INFO nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Terminating instance#033[00m
Nov 25 03:24:23 np0005534516 nova_compute[253538]: 2025-11-25 08:24:23.947 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.005 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.019 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Releasing lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.022 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquired lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.022 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.123 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.178 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.185 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.579 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.591 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Releasing lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.592 253542 DEBUG nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.631 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.631 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.644 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.729 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.730 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.737 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.738 253542 INFO nova.compute.claims [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:24 np0005534516 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 03:24:24 np0005534516 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 4.015s CPU time.
Nov 25 03:24:24 np0005534516 systemd-machined[215790]: Machine qemu-12-instance-0000000b terminated.
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.813 253542 INFO nova.virt.libvirt.driver [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance destroyed successfully.
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.814 253542 DEBUG nova.objects.instance [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'resources' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.967 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.968 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.969 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.969 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.970 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 115 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.972 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.977 253542 WARNING nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 03:24:24 np0005534516 nova_compute[253538]: 2025-11-25 08:24:24.981 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.014 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.017 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.021 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.022 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.023 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.024 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.026 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.027 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.028 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.028 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.029 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.030 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.030 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.031 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.032 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.032 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.033 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.053 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334326783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947771019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.481 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.508 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.512 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.533 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.539 253542 DEBUG nova.compute.provider_tree [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.562 253542 DEBUG nova.scheduler.client.report [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.615 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.616 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.830 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.831 253542 DEBUG nova.network.neutron [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.910 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.949 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.977 253542 INFO nova.virt.libvirt.driver [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deleting instance files /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8_del
Nov 25 03:24:25 np0005534516 nova_compute[253538]: 2025-11-25 08:24:25.978 253542 INFO nova.virt.libvirt.driver [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deletion of /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8_del complete
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254586632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.008 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.010 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <name>instance-00000009</name>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:24</nova:creationTime>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:26 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:26 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:26 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:26 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.051 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.051 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.052 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.077 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.101 253542 DEBUG nova.network.neutron [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.101 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:26 np0005534516 podman[276754]: 2025-11-25 08:24:26.108137011 +0000 UTC m=+0.055581800 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.123 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.150 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 INFO nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 1.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 DEBUG oslo.service.loopingcall [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 DEBUG nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.238 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.252 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.253 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.253 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating image(s)#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.300 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.322 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.344 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.347 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.374 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.378 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpql71eqeu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.397 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.411 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.425 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.426 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.427 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.427 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.447 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.449 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.470 253542 INFO nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 0.23 seconds to deallocate network for instance.#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.502 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpql71eqeu" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.533 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.537 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.670 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.672 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:26.777 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:24:26 np0005534516 nova_compute[253538]: 2025-11-25 08:24:26.792 253542 DEBUG oslo_concurrency.processutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 91 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Nov 25 03:24:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978263295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:27 np0005534516 nova_compute[253538]: 2025-11-25 08:24:27.253 253542 DEBUG oslo_concurrency.processutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:27 np0005534516 nova_compute[253538]: 2025-11-25 08:24:27.258 253542 DEBUG nova.compute.provider_tree [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:27 np0005534516 nova_compute[253538]: 2025-11-25 08:24:27.417 253542 DEBUG nova.scheduler.client.report [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:27 np0005534516 nova_compute[253538]: 2025-11-25 08:24:27.941 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.048 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059053.047059, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.048 253542 INFO nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.063 253542 INFO nova.scheduler.client.report [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Deleted allocations for instance 1cb8bb78-4ff6-496c-858c-41159362ffb8#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.070 253542 DEBUG nova.compute.manager [None req-3746c55d-c83d-4029-9c9e-a8aee7ddcf44 - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.134 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.142 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.143 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.#033[00m
Nov 25 03:24:28 np0005534516 systemd-machined[215790]: New machine qemu-13-instance-00000009.
Nov 25 03:24:28 np0005534516 systemd[1]: Started Virtual Machine qemu-13-instance-00000009.
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.306 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:28 np0005534516 podman[276953]: 2025-11-25 08:24:28.319470312 +0000 UTC m=+0.077470445 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:24:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:28 np0005534516 nova_compute[253538]: 2025-11-25 08:24:28.426 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] resizing rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 111 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.121 253542 DEBUG nova.objects.instance [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'migration_context' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.138 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Ensure instance console log exists: /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.141 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.145 253542 WARNING nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.149 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.149 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.153 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.159 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.179 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4659329f-611a-4436-aa9e-26937db9cd61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.180 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059069.1679134, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.180 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.182 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.183 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.187 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.188 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.199 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.205 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.208 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.211 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.237 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.238 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059069.1683638, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.238 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.271 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.275 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.295 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.301 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.354 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.354 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.355 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.421 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515615473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.626 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.651 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:29 np0005534516 nova_compute[253538]: 2025-11-25 08:24:29.658 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243680153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.151 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.153 253542 DEBUG nova.objects.instance [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'pci_devices' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.167 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <uuid>651b0445-6a0f-41e0-ad78-6318f8175e0b</uuid>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <name>instance-0000000c</name>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1478904873</nova:name>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:29</nova:creationTime>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:user uuid="5fe54f3384ea4571bee28e13c88e6d14">tempest-ServerDiagnosticsNegativeTest-966457076-project-member</nova:user>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <nova:project uuid="c86b7b0d5b344dfb82fc7554a622988e">tempest-ServerDiagnosticsNegativeTest-966457076</nova:project>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="serial">651b0445-6a0f-41e0-ad78-6318f8175e0b</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="uuid">651b0445-6a0f-41e0-ad78-6318f8175e0b</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/651b0445-6a0f-41e0-ad78-6318f8175e0b_disk">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/console.log" append="off"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:30 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:30 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:30 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:30 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.283 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.284 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.285 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Using config drive#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.317 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.629 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating config drive at /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.633 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9prsvbg2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.775 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9prsvbg2" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.813 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.816 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 116 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.988 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:30 np0005534516 nova_compute[253538]: 2025-11-25 08:24:30.989 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deleting local config drive /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config because it was imported into RBD.#033[00m
Nov 25 03:24:31 np0005534516 systemd-machined[215790]: New machine qemu-14-instance-0000000c.
Nov 25 03:24:31 np0005534516 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.761 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.761 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.764 253542 INFO nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Terminating instance#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.764 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.765 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquired lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.765 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.880 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059071.8806245, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.881 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.883 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.883 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.886 253542 INFO nova.virt.libvirt.driver [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance spawned successfully.#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.887 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.903 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.908 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.912 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.914 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.914 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.928 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.945 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.946 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059071.8825336, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.946 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.978 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:31 np0005534516 nova_compute[253538]: 2025-11-25 08:24:31.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.005 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.010 253542 INFO nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 5.76 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.011 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.094 253542 INFO nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 7.39 seconds to build instance.#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.141 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.281 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.295 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Releasing lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.295 253542 DEBUG nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:24:32 np0005534516 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 03:24:32 np0005534516 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000009.scope: Consumed 3.691s CPU time.
Nov 25 03:24:32 np0005534516 systemd-machined[215790]: Machine qemu-13-instance-00000009 terminated.
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.517 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.#033[00m
Nov 25 03:24:32 np0005534516 nova_compute[253538]: 2025-11-25 08:24:32.517 253542 DEBUG nova.objects.instance [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 25 03:24:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 03:24:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 25 03:24:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 130 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.200 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.200 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.201 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.201 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.202 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.204 253542 INFO nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Terminating instance#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquired lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.421 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.771 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.785 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Releasing lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.786 253542 DEBUG nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.810 253542 INFO nova.virt.libvirt.driver [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.811 253542 INFO nova.virt.libvirt.driver [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.858 253542 INFO nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 1.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG oslo.service.loopingcall [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:24:33 np0005534516 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:24:33 np0005534516 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 03:24:33 np0005534516 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 2.723s CPU time.
Nov 25 03:24:33 np0005534516 systemd-machined[215790]: Machine qemu-14-instance-0000000c terminated.
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.007 253542 INFO nova.virt.libvirt.driver [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance destroyed successfully.#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.008 253542 DEBUG nova.objects.instance [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'resources' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.088 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.099 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.122 253542 INFO nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 0.26 seconds to deallocate network for instance.#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.168 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.169 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.240 253542 DEBUG oslo_concurrency.processutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178786896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.743 253542 DEBUG oslo_concurrency.processutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.752 253542 DEBUG nova.compute.provider_tree [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.770 253542 DEBUG nova.scheduler.client.report [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.796 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.805 253542 INFO nova.virt.libvirt.driver [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deleting instance files /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b_del#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.806 253542 INFO nova.virt.libvirt.driver [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deletion of /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b_del complete#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.827 253542 INFO nova.scheduler.client.report [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Deleted allocations for instance 4659329f-611a-4436-aa9e-26937db9cd61#033[00m
Nov 25 03:24:34 np0005534516 podman[277336]: 2025-11-25 08:24:34.849148498 +0000 UTC m=+0.093885169 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.865 253542 INFO nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.865 253542 DEBUG oslo.service.loopingcall [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.866 253542 DEBUG nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.866 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.927 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 91 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 294 op/s
Nov 25 03:24:34 np0005534516 nova_compute[253538]: 2025-11-25 08:24:34.996 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.008 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.025 253542 INFO nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 0.16 seconds to deallocate network for instance.#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.064 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.065 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.106 253542 DEBUG oslo_concurrency.processutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980519771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.551 253542 DEBUG oslo_concurrency.processutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.561 253542 DEBUG nova.compute.provider_tree [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.578 253542 DEBUG nova.scheduler.client.report [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.603 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.629 253542 INFO nova.scheduler.client.report [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Deleted allocations for instance 651b0445-6a0f-41e0-ad78-6318f8175e0b#033[00m
Nov 25 03:24:35 np0005534516 nova_compute[253538]: 2025-11-25 08:24:35.695 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:36 np0005534516 nova_compute[253538]: 2025-11-25 08:24:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 57 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 308 op/s
Nov 25 03:24:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.7 MiB/s wr, 287 op/s
Nov 25 03:24:39 np0005534516 nova_compute[253538]: 2025-11-25 08:24:39.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:39 np0005534516 nova_compute[253538]: 2025-11-25 08:24:39.813 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059064.8116925, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:39 np0005534516 nova_compute[253538]: 2025-11-25 08:24:39.813 253542 INFO nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:24:39 np0005534516 nova_compute[253538]: 2025-11-25 08:24:39.830 253542 DEBUG nova.compute.manager [None req-28143c51-d59c-45ad-919b-665e2967ca20 - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.1 MiB/s wr, 270 op/s
Nov 25 03:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:41 np0005534516 nova_compute[253538]: 2025-11-25 08:24:41.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 408 KiB/s wr, 214 op/s
Nov 25 03:24:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:44 np0005534516 nova_compute[253538]: 2025-11-25 08:24:44.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 345 KiB/s wr, 182 op/s
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.716 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.717 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.737 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.816 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.817 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.825 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.826 253542 INFO nova.compute.claims [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:45 np0005534516 nova_compute[253538]: 2025-11-25 08:24:45.940 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991617410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.382 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.389 253542 DEBUG nova.compute.provider_tree [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.411 253542 DEBUG nova.scheduler.client.report [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.437 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.438 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.477 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.477 253542 DEBUG nova.network.neutron [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.493 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.507 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.584 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.585 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.586 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating image(s)#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.607 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.627 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.647 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.651 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.720 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.746 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.750 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.850 253542 DEBUG nova.network.neutron [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:24:46 np0005534516 nova_compute[253538]: 2025-11-25 08:24:46.850 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.481 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.532 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059072.5143735, 4659329f-611a-4436-aa9e-26937db9cd61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.533 253542 INFO nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.539 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] resizing rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.623 253542 DEBUG nova.compute.manager [None req-6c710a7f-d17d-4277-a446-38db8056cf2d - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.814 253542 DEBUG nova.objects.instance [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Ensure instance console log exists: /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.826 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.826 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.827 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.830 253542 WARNING nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.835 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.835 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.844 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.844 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:47 np0005534516 nova_compute[253538]: 2025-11-25 08:24:47.846 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466949336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.275 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.294 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.297 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709436398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.722 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.725 253542 DEBUG nova.objects.instance [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.745 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <uuid>8e6c81c4-a422-42e4-950b-66fa6411c1eb</uuid>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <name>instance-0000000d</name>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1133317659</nova:name>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:47</nova:creationTime>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:user uuid="345cc8bd54dd46c9aaf034a44f55f52e">tempest-ServersAdminNegativeTestJSON-455379081-project-member</nova:user>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <nova:project uuid="fbc71ebb18c64a72bd6d93bc520d8921">tempest-ServersAdminNegativeTestJSON-455379081</nova:project>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="serial">8e6c81c4-a422-42e4-950b-66fa6411c1eb</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="uuid">8e6c81c4-a422-42e4-950b-66fa6411c1eb</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/console.log" append="off"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:48 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:48 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:48 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:48 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.797 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.797 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.798 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Using config drive
Nov 25 03:24:48 np0005534516 nova_compute[253538]: 2025-11-25 08:24:48.816 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 46 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 344 KiB/s wr, 13 op/s
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.006 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059074.0059495, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.007 253542 INFO nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Stopped (Lifecycle Event)
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.025 253542 DEBUG nova.compute.manager [None req-881a29ee-6414-46ec-937f-71063b4ee328 - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.227 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating config drive at /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.233 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w7wb0ti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.361 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w7wb0ti" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.389 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.393 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.604 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:49 np0005534516 nova_compute[253538]: 2025-11-25 08:24:49.605 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deleting local config drive /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config because it was imported into RBD.
Nov 25 03:24:49 np0005534516 systemd-machined[215790]: New machine qemu-15-instance-0000000d.
Nov 25 03:24:49 np0005534516 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.148 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059090.1484919, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.149 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Resumed (Lifecycle Event)
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.151 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.151 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.155 253542 INFO nova.virt.libvirt.driver [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance spawned successfully.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.155 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.173 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.178 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.178 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.179 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.180 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.180 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.181 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.185 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.220 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.220 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059090.1490479, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.221 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Started (Lifecycle Event)
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.251 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.255 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.263 253542 INFO nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 3.68 seconds to spawn the instance on the hypervisor.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.263 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:24:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 25 03:24:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 03:24:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.413 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.451 253542 INFO nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 4.66 seconds to build instance.
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.471 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.910 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.911 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:50 np0005534516 nova_compute[253538]: 2025-11-25 08:24:50.938 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:24:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 63 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 849 KiB/s wr, 6 op/s
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.014 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.015 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.020 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.021 253542 INFO nova.compute.claims [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.153 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:24:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044485481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.659 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.666 253542 DEBUG nova.compute.provider_tree [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.686 253542 DEBUG nova.scheduler.client.report [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.715 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.716 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.776 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.777 253542 DEBUG nova.network.neutron [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.836 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.853 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.943 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.947 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.948 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating image(s)
Nov 25 03:24:51 np0005534516 nova_compute[253538]: 2025-11-25 08:24:51.982 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.013 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.044 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.049 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.143 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.144 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.146 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.146 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.176 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.180 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e576418-7454-49eb-9918-2d7f04547bd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.580 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e576418-7454-49eb-9918-2d7f04547bd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.647 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] resizing rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.752 253542 DEBUG nova.objects.instance [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.775 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.776 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Ensure instance console log exists: /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.777 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.777 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.778 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.881 253542 DEBUG nova.network.neutron [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.883 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.887 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.892 253542 WARNING nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.898 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.899 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.903 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.904 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.905 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.906 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.907 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.908 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.909 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.909 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.910 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.911 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.912 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.912 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.913 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.914 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:52 np0005534516 nova_compute[253538]: 2025-11-25 08:24:52.920 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:24:53
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.301 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.304 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.323 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753797572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.403 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.429 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.435 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.462 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.463 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.472 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.472 253542 INFO nova.compute.claims [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.621 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:24:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:24:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732634573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.913 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.917 253542 DEBUG nova.objects.instance [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:53 np0005534516 nova_compute[253538]: 2025-11-25 08:24:53.948 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <uuid>7e576418-7454-49eb-9918-2d7f04547bd8</uuid>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <name>instance-0000000e</name>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-TenantUsagesTestJSON-server-625857128</nova:name>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:52</nova:creationTime>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:user uuid="33ac0c97ce984130b9394683c963cda1">tempest-TenantUsagesTestJSON-1684672578-project-member</nova:user>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <nova:project uuid="e8f1e6855a5b45a8b1c74763485618f4">tempest-TenantUsagesTestJSON-1684672578</nova:project>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="serial">7e576418-7454-49eb-9918-2d7f04547bd8</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="uuid">7e576418-7454-49eb-9918-2d7f04547bd8</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7e576418-7454-49eb-9918-2d7f04547bd8_disk">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7e576418-7454-49eb-9918-2d7f04547bd8_disk.config">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/console.log" append="off"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.037 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.038 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.040 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Using config drive#033[00m
Nov 25 03:24:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086237870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.079 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.090 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.099 253542 DEBUG nova.compute.provider_tree [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.116 253542 DEBUG nova.scheduler.client.report [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.146 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.147 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.211 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.212 253542 DEBUG nova.network.neutron [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.236 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.268 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.368 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating config drive at /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.375 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp054ozkg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.408 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.409 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.413 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.415 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.415 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating image(s)#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.447 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.473 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.505 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.519 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.554 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp054ozkg_" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.591 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.595 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.616 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.620 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.621 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.621 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.622 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.641 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.644 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d3764acc-bce0-452e-bba5-90d76d88df2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.695 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.695 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.702 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.703 253542 INFO nova.compute.claims [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.714 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.715 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deleting local config drive /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config because it was imported into RBD.#033[00m
Nov 25 03:24:54 np0005534516 systemd-machined[215790]: New machine qemu-16-instance-0000000e.
Nov 25 03:24:54 np0005534516 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.849 253542 DEBUG nova.network.neutron [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.850 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.886 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.916 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d3764acc-bce0-452e-bba5-90d76d88df2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:54 np0005534516 nova_compute[253538]: 2025-11-25 08:24:54.982 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] resizing rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 110 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 176 op/s
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.096 253542 DEBUG nova.objects.instance [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'migration_context' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.110 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.110 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Ensure instance console log exists: /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.113 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.119 253542 WARNING nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.123 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.124 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.127 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.127 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.134 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898110085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.410 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.417 253542 DEBUG nova.compute.provider_tree [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.430 253542 DEBUG nova.scheduler.client.report [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.450 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.451 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.497 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.506 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.521 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.538 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.557 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059095.557283, 7e576418-7454-49eb-9918-2d7f04547bd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.562 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance spawned successfully.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.562 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.594 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.603 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.606 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.606 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.608 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090641423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059095.5584865, 7e576418-7454-49eb-9918-2d7f04547bd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.674 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.695 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.699 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.720 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.722 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.722 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.741 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.760 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.779 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.784 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.811 253542 INFO nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 3.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.811 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.815 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.860 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.863 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.863 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.864 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.864 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.894 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.898 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.930 253542 INFO nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 4.94 seconds to build instance.#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.938 253542 DEBUG nova.policy [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:24:55 np0005534516 nova_compute[253538]: 2025-11-25 08:24:55.950 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569556256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.191 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.193 253542 DEBUG nova.objects.instance [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.206 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <uuid>d3764acc-bce0-452e-bba5-90d76d88df2e</uuid>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <name>instance-0000000f</name>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-922281324</nova:name>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:55</nova:creationTime>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:user uuid="345cc8bd54dd46c9aaf034a44f55f52e">tempest-ServersAdminNegativeTestJSON-455379081-project-member</nova:user>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <nova:project uuid="fbc71ebb18c64a72bd6d93bc520d8921">tempest-ServersAdminNegativeTestJSON-455379081</nova:project>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="serial">d3764acc-bce0-452e-bba5-90d76d88df2e</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="uuid">d3764acc-bce0-452e-bba5-90d76d88df2e</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/d3764acc-bce0-452e-bba5-90d76d88df2e_disk">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/console.log" append="off"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:56 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:56 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:56 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:56 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.211 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.296 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:56 np0005534516 podman[278481]: 2025-11-25 08:24:56.311514842 +0000 UTC m=+0.069460364 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Using config drive#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.350 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.399 253542 DEBUG nova.objects.instance [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.413 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.413 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.623 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Successfully created port: 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.650 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating config drive at /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.659 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_oc8sw0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.736 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.737 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.750 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.793 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_oc8sw0" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.817 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.823 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.872 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.872 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.880 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.881 253542 INFO nova.compute.claims [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.970 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:56 np0005534516 nova_compute[253538]: 2025-11-25 08:24:56.971 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deleting local config drive /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config because it was imported into RBD.#033[00m
Nov 25 03:24:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 162 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Nov 25 03:24:57 np0005534516 systemd-machined[215790]: New machine qemu-17-instance-0000000f.
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.034 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.036 253542 INFO nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Terminating instance#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquired lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:57 np0005534516 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.054 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.170 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.393 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.408 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Releasing lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.409 253542 DEBUG nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.425 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Successfully updated port: 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:57 np0005534516 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 25 03:24:57 np0005534516 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 2.525s CPU time.
Nov 25 03:24:57 np0005534516 systemd-machined[215790]: Machine qemu-16-instance-0000000e terminated.
Nov 25 03:24:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315663466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.500 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG nova.compute.manager [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-changed-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG nova.compute.manager [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Refreshing instance network info cache due to event network-changed-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.507 253542 DEBUG nova.compute.provider_tree [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.523 253542 DEBUG nova.scheduler.client.report [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.546 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.546 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.587 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.588 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.602 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.618 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance destroyed successfully.#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.630 253542 DEBUG nova.objects.instance [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'resources' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.688 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.703 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.704 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.705 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating image(s)#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.721 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.740 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.759 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.764 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.792 253542 DEBUG nova.policy [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.838 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.839 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.839 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.840 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.858 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.862 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23ace5af-6840-42aa-a801-98abbb4f3a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059097.981997, d3764acc-bce0-452e-bba5-90d76d88df2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.986 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.995 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:24:57 np0005534516 nova_compute[253538]: 2025-11-25 08:24:57.997 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.002 253542 INFO nova.virt.libvirt.driver [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance spawned successfully.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.002 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.004 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.009 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.024 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.025 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.025 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.032 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.032 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059097.9842272, d3764acc-bce0-452e-bba5-90d76d88df2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.033 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Started (Lifecycle Event)#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.072 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.088 253542 INFO nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 3.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.088 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.149 253542 INFO nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 4.77 seconds to build instance.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.162 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.223 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23ace5af-6840-42aa-a801-98abbb4f3a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.269 253542 INFO nova.virt.libvirt.driver [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deleting instance files /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8_del#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.270 253542 INFO nova.virt.libvirt.driver [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deletion of /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8_del complete#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.310 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Successfully created port: e9d1298d-411a-4018-ba08-c41d40ba0d41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.315 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.321 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.357 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.358 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance network_info: |[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.359 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.359 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Refreshing network info cache for port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.363 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 INFO nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 DEBUG oslo.service.loopingcall [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 DEBUG nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.369 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.381 253542 WARNING nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.428 253542 DEBUG nova.objects.instance [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.431 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.431 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.441 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.458 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.459 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Ensure instance console log exists: /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.460 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.460 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.461 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.654 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.669 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.689 253542 INFO nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 0.32 seconds to deallocate network for instance.#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.731 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.732 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:58 np0005534516 podman[278915]: 2025-11-25 08:24:58.819532286 +0000 UTC m=+0.068421405 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853933548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.881 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.910 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.915 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:58 np0005534516 nova_compute[253538]: 2025-11-25 08:24:58.935 253542 DEBUG oslo_concurrency.processutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:24:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 184 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 323 op/s
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.225 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Successfully updated port: e9d1298d-411a-4018-ba08-c41d40ba0d41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.243 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.243 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.244 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:24:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:24:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317007074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.329 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.331 253542 DEBUG nova.virt.libvirt.vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857
664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:55Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.331 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.333 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.334 253542 DEBUG nova.objects.instance [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.341 253542 DEBUG nova.objects.instance [None req-dc4d201f-60cf-4a6c-bd3e-1fd40f0ead3f 645c20ba7f4c4bb7a8facc194a00857c f1a1c40fc7b44a8fb2422030f90fbc4c - - default default] Lazy-loading 'pci_devices' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.362 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <name>instance-00000010</name>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:24:58</nova:creationTime>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <target dev="tap4ad9572b-6a"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:24:59 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:24:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:24:59 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:24:59 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.362 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Preparing to wait for external event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.364 253542 DEBUG nova.virt.libvirt.vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:55Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.364 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.365 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.365 253542 DEBUG os_vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.367 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.368 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.372 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.373 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:24:59 np0005534516 NetworkManager[48915]: <info>  [1764059099.3756] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.383 253542 INFO os_vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.396 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059099.396449, d3764acc-bce0-452e-bba5-90d76d88df2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.397 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:24:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:24:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754520027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.422 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.430 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.433 253542 DEBUG oslo_concurrency.processutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.443 253542 DEBUG nova.compute.provider_tree [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.458 253542 DEBUG nova.scheduler.client.report [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.465 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.485 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.503 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.538 253542 INFO nova.scheduler.client.report [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Deleted allocations for instance 7e576418-7454-49eb-9918-2d7f04547bd8#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.581 253542 DEBUG nova.compute.manager [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-changed-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.582 253542 DEBUG nova.compute.manager [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Refreshing instance network info cache due to event network-changed-e9d1298d-411a-4018-ba08-c41d40ba0d41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.582 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:24:59 np0005534516 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 25 03:24:59 np0005534516 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 2.400s CPU time.
Nov 25 03:24:59 np0005534516 systemd-machined[215790]: Machine qemu-17-instance-0000000f terminated.
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.598 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.752 253542 DEBUG nova.compute.manager [None req-dc4d201f-60cf-4a6c-bd3e-1fd40f0ead3f 645c20ba7f4c4bb7a8facc194a00857c f1a1c40fc7b44a8fb2422030f90fbc4c - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.915 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updated VIF entry in instance network info cache for port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.915 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:24:59 np0005534516 nova_compute[253538]: 2025-11-25 08:24:59.930 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.065 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.069 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk73hby8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.200 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk73hby8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.222 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.224 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.353 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.353 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:00 np0005534516 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 03:25:00 np0005534516 systemd-udevd[279022]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.4002] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 03:25:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:00Z|00046|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 03:25:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:00Z|00047|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.4125] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.4138] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.417 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.418 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.419 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7927b67f-59a1-46b3-b331-66999dd0f287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.430 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93269c36-a1 in ovnmeta-93269c36-ab23-4d95-925a-798173550624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.432 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93269c36-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[48548175-609b-4795-9017-df44aceda165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97355664-0056-4b05-ad99-532f3ea0db63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 systemd-machined[215790]: New machine qemu-18-instance-00000010.
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.443 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cee897ce-da77-4a1e-81d5-710bfbd8d714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.466 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11d7eb-f45e-45d5-881e-c323d09eebf9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:00Z|00048|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 03:25:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:00Z|00049|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.489 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7692981b-1481-4bb7-a14e-6dd5ca41b99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 systemd-udevd[279077]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.4955] manager: (tap93269c36-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.494 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62685375-b584-4037-b8ca-eef1135448b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.527 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[af9c9c9d-a84f-4752-8274-1d302da5d2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.530 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d3be5eff-4940-4064-a404-eae407ad5b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.5510] device (tap93269c36-a0): carrier: link connected
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.555 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5491c5b9-d56e-4322-a3a9-2fbc68c420f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89819ad9-ee81-49bf-b56a-2ac12ed50152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279111, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edc54449-25e4-4cf9-86bb-c46bdd112130]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445515, 'tstamp': 445515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279112, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.607 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdbfa62-7ab6-461b-b5ef-fc77784a67a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279113, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.636 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82f8b477-27d8-4624-86cd-5aed35ae370d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.699 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6647dfd4-492d-481d-bcdf-2f6c6fb8f6b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.700 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.701 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.701 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 kernel: tap93269c36-a0: entered promiscuous mode
Nov 25 03:25:00 np0005534516 NetworkManager[48915]: <info>  [1764059100.7046] manager: (tap93269c36-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.706 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:00Z|00050|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.711 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5de66faa-f965-4b4a-ae61-12f22fd8887b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.712 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-93269c36-ab23-4d95-925a-798173550624
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 93269c36-ab23-4d95-925a-798173550624
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:25:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.714 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'env', 'PROCESS_TAG=haproxy-93269c36-ab23-4d95-925a-798173550624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93269c36-ab23-4d95-925a-798173550624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:00 np0005534516 nova_compute[253538]: 2025-11-25 08:25:00.988 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 227 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 434 op/s
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.008 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.009 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance network_info: |[{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.010 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.011 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Refreshing network info cache for port e9d1298d-411a-4018-ba08-c41d40ba0d41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.018 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start _get_guest_xml network_info=[{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.026 253542 WARNING nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.035 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.036 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.040 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.044 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.044 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.045 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.045 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.049 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:01 np0005534516 podman[279181]: 2025-11-25 08:25:01.120080466 +0000 UTC m=+0.059931039 container create 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:25:01 np0005534516 systemd[1]: Started libpod-conmon-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope.
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.165 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.1648073, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.166 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.191 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f08b2f30a3472e570aff96bae58b6aa5c1b8e49bb02775e83be72c16348ca83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:01 np0005534516 podman[279181]: 2025-11-25 08:25:01.09311267 +0000 UTC m=+0.032963263 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.199 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.1654396, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.200 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:01 np0005534516 podman[279181]: 2025-11-25 08:25:01.205649775 +0000 UTC m=+0.145500368 container init 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.213 253542 DEBUG nova.compute.manager [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.214 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.214 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.215 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:01 np0005534516 podman[279181]: 2025-11-25 08:25:01.215626671 +0000 UTC m=+0.155477244 container start 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.215 253542 DEBUG nova.compute.manager [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Processing event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.217 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.218 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.226 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.228 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.2243848, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.229 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.233 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.234 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:01 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : New worker (279227) forked
Nov 25 03:25:01 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : Loading success.
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.246 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.250 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.259 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.261 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.261 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.269 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.319 253542 INFO nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 5.60 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.320 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.377 253542 INFO nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 6.71 seconds to build instance.#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.394 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441138275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.530 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.561 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:01 np0005534516 nova_compute[253538]: 2025-11-25 08:25:01.565 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.014 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.015 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.016 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.016 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.017 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.019 253542 INFO nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Terminating instance#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.020 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.021 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquired lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.021 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025257833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.047 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.048 253542 DEBUG nova.virt.libvirt.vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:57Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.049 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.050 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.051 253542 DEBUG nova.objects.instance [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.063 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <uuid>23ace5af-6840-42aa-a801-98abbb4f3a52</uuid>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <name>instance-00000011</name>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-2132333394</nova:name>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:01</nova:creationTime>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <nova:port uuid="e9d1298d-411a-4018-ba08-c41d40ba0d41">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="serial">23ace5af-6840-42aa-a801-98abbb4f3a52</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="uuid">23ace5af-6840-42aa-a801-98abbb4f3a52</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/23ace5af-6840-42aa-a801-98abbb4f3a52_disk">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:af:6c:e2"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <target dev="tape9d1298d-41"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/console.log" append="off"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:02 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:02 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:02 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:02 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Preparing to wait for external event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG nova.virt.libvirt.vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:57Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.066 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.066 253542 DEBUG os_vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.067 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.068 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.070 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9d1298d-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.071 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape9d1298d-41, col_values=(('external_ids', {'iface-id': 'e9d1298d-411a-4018-ba08-c41d40ba0d41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:6c:e2', 'vm-uuid': '23ace5af-6840-42aa-a801-98abbb4f3a52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:02 np0005534516 NetworkManager[48915]: <info>  [1764059102.0732] manager: (tape9d1298d-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.080 253542 INFO os_vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41')#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.130 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.130 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.131 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:af:6c:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.132 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Using config drive#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.159 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.232 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.675 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.688 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Releasing lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.689 253542 DEBUG nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.694 253542 INFO nova.virt.libvirt.driver [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance destroyed successfully.#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.695 253542 DEBUG nova.objects.instance [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'resources' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.737 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updated VIF entry in instance network info cache for port e9d1298d-411a-4018-ba08-c41d40ba0d41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.738 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.761 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.793 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating config drive at /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.800 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgf5618h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.945 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgf5618h" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.971 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:02 np0005534516 nova_compute[253538]: 2025-11-25 08:25:02.975 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 406 op/s
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.139 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.140 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deleting local config drive /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:03 np0005534516 kernel: tape9d1298d-41: entered promiscuous mode
Nov 25 03:25:03 np0005534516 NetworkManager[48915]: <info>  [1764059103.1940] manager: (tape9d1298d-41): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 03:25:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:03Z|00051|binding|INFO|Claiming lport e9d1298d-411a-4018-ba08-c41d40ba0d41 for this chassis.
Nov 25 03:25:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:03Z|00052|binding|INFO|e9d1298d-411a-4018-ba08-c41d40ba0d41: Claiming fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.201 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:6c:e2 10.100.0.3'], port_security=['fa:16:3e:af:6c:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '23ace5af-6840-42aa-a801-98abbb4f3a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e9d1298d-411a-4018-ba08-c41d40ba0d41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.202 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e9d1298d-411a-4018-ba08-c41d40ba0d41 in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.203 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.217 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[20604f6f-f963-460d-b4f8-31f6d3befe8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:03Z|00053|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 ovn-installed in OVS
Nov 25 03:25:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:03Z|00054|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 up in Southbound
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:03 np0005534516 systemd-machined[215790]: New machine qemu-19-instance-00000011.
Nov 25 03:25:03 np0005534516 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.247 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[997360ca-f129-408f-98e9-a30a7497076a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.252 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba18ad85-3a69-4b28-ba74-d704a01ea256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 systemd-udevd[279374]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:03 np0005534516 NetworkManager[48915]: <info>  [1764059103.2719] device (tape9d1298d-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:03 np0005534516 NetworkManager[48915]: <info>  [1764059103.2728] device (tape9d1298d-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.280 253542 INFO nova.virt.libvirt.driver [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deleting instance files /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e_del#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.280 253542 INFO nova.virt.libvirt.driver [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deletion of /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e_del complete#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.287 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6de63c-0f4e-4462-813d-d1cd35342056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd16d4f8-f594-45a6-b17d-f2df73350819]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279384, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d508698-aaf8-42fc-a736-81e181c82efa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279385, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279385, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.325 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.327 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.328 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.328 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.329 253542 INFO nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.329 253542 DEBUG oslo.service.loopingcall [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.330 253542 DEBUG nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.330 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:25:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.372 253542 DEBUG nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.373 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.373 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.374 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.374 253542 DEBUG nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.375 253542 WARNING nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.423 253542 DEBUG nova.compute.manager [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.424 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.425 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.426 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.426 253542 DEBUG nova.compute.manager [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Processing event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.570 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.588 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.601 253542 INFO nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 0.27 seconds to deallocate network for instance.#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.634 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.635 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015271308483102168 of space, bias 1.0, pg target 0.45813925449306503 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:25:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:25:03 np0005534516 nova_compute[253538]: 2025-11-25 08:25:03.733 253542 DEBUG oslo_concurrency.processutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289916280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.209 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.209265, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.210 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.213 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.215 253542 DEBUG oslo_concurrency.processutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.216 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.221 253542 DEBUG nova.compute.provider_tree [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.224 253542 INFO nova.virt.libvirt.driver [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance spawned successfully.#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.224 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.258 253542 DEBUG nova.scheduler.client.report [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.262 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.267 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.318 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.209551, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.319 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.322 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.338 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.339 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.340 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.340 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.341 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.342 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.348 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.353 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.2160735, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.353 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.360 253542 INFO nova.scheduler.client.report [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Deleted allocations for instance d3764acc-bce0-452e-bba5-90d76d88df2e#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.405 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.409 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.431 253542 INFO nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 6.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.432 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.467 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.515 253542 INFO nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 7.66 seconds to build instance.#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.532 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:04 np0005534516 nova_compute[253538]: 2025-11-25 08:25:04.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 227 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 484 op/s
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.390 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.390 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.391 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.391 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.392 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.393 253542 INFO nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Terminating instance#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquired lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.561 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.716 253542 DEBUG nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.716 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.717 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.717 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.718 253542 DEBUG nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] No waiting events found dispatching network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.718 253542 WARNING nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received unexpected event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:25:05 np0005534516 podman[279451]: 2025-11-25 08:25:05.834500145 +0000 UTC m=+0.086100545 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.902 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Releasing lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:05 np0005534516 nova_compute[253538]: 2025-11-25 08:25:05.921 253542 DEBUG nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:25:05 np0005534516 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 03:25:05 np0005534516 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 12.782s CPU time.
Nov 25 03:25:05 np0005534516 systemd-machined[215790]: Machine qemu-15-instance-0000000d terminated.
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.142 253542 INFO nova.virt.libvirt.driver [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance destroyed successfully.#033[00m
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.143 253542 DEBUG nova.objects.instance [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'resources' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.454 253542 INFO nova.virt.libvirt.driver [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deleting instance files /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb_del#033[00m
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.454 253542 INFO nova.virt.libvirt.driver [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deletion of /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb_del complete#033[00m
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.538 253542 INFO nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.539 253542 DEBUG oslo.service.loopingcall [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.539 253542 DEBUG nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.540 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.890 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.903 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.920 253542 INFO nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 0.38 seconds to deallocate network for instance.
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.927 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.975 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:06 np0005534516 nova_compute[253538]: 2025-11-25 08:25:06.976 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 183 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 8.0 MiB/s wr, 530 op/s
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.055 253542 DEBUG oslo_concurrency.processutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479462927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.471 253542 DEBUG oslo_concurrency.processutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.476 253542 DEBUG nova.compute.provider_tree [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.490 253542 DEBUG nova.scheduler.client.report [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.533 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.586 253542 INFO nova.scheduler.client.report [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Deleted allocations for instance 8e6c81c4-a422-42e4-950b-66fa6411c1eb
Nov 25 03:25:07 np0005534516 nova_compute[253538]: 2025-11-25 08:25:07.650 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.518 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.540 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.540 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.541 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.542 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:08 np0005534516 nova_compute[253538]: 2025-11-25 08:25:08.542 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:25:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 176 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.3 MiB/s wr, 459 op/s
Nov 25 03:25:09 np0005534516 nova_compute[253538]: 2025-11-25 08:25:09.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:09 np0005534516 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:09 np0005534516 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:09 np0005534516 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 403a0863-1c3d-464f-bb32-6d10bcd916df does not exist
Nov 25 03:25:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0bb4f461-3f61-48e8-a398-84f307cf5df9 does not exist
Nov 25 03:25:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 855f5374-84c0-4844-9d99-46c21a6fde16 does not exist
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:25:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.691 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.691 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.707 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.765 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.766 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.773 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:25:10 np0005534516 nova_compute[253538]: 2025-11-25 08:25:10.773 253542 INFO nova.compute.claims [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:25:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 425 op/s
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.035746729 +0000 UTC m=+0.043849765 container create aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:25:11 np0005534516 systemd[1]: Started libpod-conmon-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope.
Nov 25 03:25:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.014979144 +0000 UTC m=+0.023082210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.121069051 +0000 UTC m=+0.129172087 container init aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.128114236 +0000 UTC m=+0.136217272 container start aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.130969285 +0000 UTC m=+0.139072341 container attach aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:25:11 np0005534516 vigorous_mirzakhani[279805]: 167 167
Nov 25 03:25:11 np0005534516 systemd[1]: libpod-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope: Deactivated successfully.
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.135699595 +0000 UTC m=+0.143802641 container died aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:25:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c755a86c7ee0b329a4b70ebbfbb6b6c40051375c85626013e0354afc2f12b899-merged.mount: Deactivated successfully.
Nov 25 03:25:11 np0005534516 podman[279789]: 2025-11-25 08:25:11.173955385 +0000 UTC m=+0.182058431 container remove aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.202 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:11 np0005534516 systemd[1]: libpod-conmon-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope: Deactivated successfully.
Nov 25 03:25:11 np0005534516 podman[279829]: 2025-11-25 08:25:11.356486767 +0000 UTC m=+0.044342299 container create ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:25:11 np0005534516 systemd[1]: Started libpod-conmon-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope.
Nov 25 03:25:11 np0005534516 podman[279829]: 2025-11-25 08:25:11.338554491 +0000 UTC m=+0.026410043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:25:11 np0005534516 podman[279829]: 2025-11-25 08:25:11.466578195 +0000 UTC m=+0.154433747 container init ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:25:11 np0005534516 podman[279829]: 2025-11-25 08:25:11.475979075 +0000 UTC m=+0.163834617 container start ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 03:25:11 np0005534516 podman[279829]: 2025-11-25 08:25:11.479519363 +0000 UTC m=+0.167374895 container attach ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:25:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802608226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.677 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.685 253542 DEBUG nova.compute.provider_tree [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.703 253542 DEBUG nova.scheduler.client.report [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.726 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.727 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.784 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.785 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.805 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.825 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.935 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.937 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.938 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating image(s)#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.974 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:11 np0005534516 nova_compute[253538]: 2025-11-25 08:25:11.997 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.022 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.026 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.091 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.091 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.092 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.092 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.112 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.115 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.321 253542 DEBUG nova.policy [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.407 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.517 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.629 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059097.627532, 7e576418-7454-49eb-9918-2d7f04547bd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.629 253542 INFO nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.649 253542 DEBUG nova.compute.manager [None req-faafaea5-dd09-4d16-b520-ace4c0b3c8f6 - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:12 np0005534516 eloquent_lichterman[279864]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:25:12 np0005534516 eloquent_lichterman[279864]: --> relative data size: 1.0
Nov 25 03:25:12 np0005534516 eloquent_lichterman[279864]: --> All data devices are unavailable
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.710 253542 DEBUG nova.objects.instance [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.722 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.722 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Ensure instance console log exists: /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:12 np0005534516 systemd[1]: libpod-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Deactivated successfully.
Nov 25 03:25:12 np0005534516 systemd[1]: libpod-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Consumed 1.139s CPU time.
Nov 25 03:25:12 np0005534516 podman[279829]: 2025-11-25 08:25:12.781433361 +0000 UTC m=+1.469288913 container died ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:25:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7-merged.mount: Deactivated successfully.
Nov 25 03:25:12 np0005534516 podman[279829]: 2025-11-25 08:25:12.867632117 +0000 UTC m=+1.555487669 container remove ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:25:12 np0005534516 systemd[1]: libpod-conmon-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Deactivated successfully.
Nov 25 03:25:12 np0005534516 nova_compute[253538]: 2025-11-25 08:25:12.977 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Successfully created port: a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:25:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 295 op/s
Nov 25 03:25:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944451179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.117 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.199 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.199 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.203 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.203 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:25:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.460 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4274MB free_disk=59.94648361206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.534056744 +0000 UTC m=+0.046882819 container create 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23ace5af-6840-42aa-a801-98abbb4f3a52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.561 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.561 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:25:13 np0005534516 systemd[1]: Started libpod-conmon-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope.
Nov 25 03:25:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.517411864 +0000 UTC m=+0.030237969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.613258786 +0000 UTC m=+0.126084871 container init 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.620758964 +0000 UTC m=+0.133585029 container start 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.623437548 +0000 UTC m=+0.136263633 container attach 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:25:13 np0005534516 crazy_rubin[280253]: 167 167
Nov 25 03:25:13 np0005534516 systemd[1]: libpod-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope: Deactivated successfully.
Nov 25 03:25:13 np0005534516 conmon[280253]: conmon 32a1fc0e19eabd87fe59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope/container/memory.events
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.627274494 +0000 UTC m=+0.140100569 container died 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:13 np0005534516 nova_compute[253538]: 2025-11-25 08:25:13.632 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d2bff00768786ce34c44da0ced77eb537416f09d3517d8b85e90fdec3a696689-merged.mount: Deactivated successfully.
Nov 25 03:25:13 np0005534516 podman[280238]: 2025-11-25 08:25:13.755568146 +0000 UTC m=+0.268394221 container remove 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:25:13 np0005534516 systemd[1]: libpod-conmon-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope: Deactivated successfully.
Nov 25 03:25:14 np0005534516 podman[280296]: 2025-11-25 08:25:13.913359193 +0000 UTC m=+0.020910259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:14 np0005534516 podman[280296]: 2025-11-25 08:25:14.009846314 +0000 UTC m=+0.117397400 container create 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.042 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Successfully updated port: a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997014469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:14Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:25:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:14Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:25:14 np0005534516 systemd[1]: Started libpod-conmon-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope.
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.129 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.147 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.162 253542 DEBUG nova.compute.manager [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-changed-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.163 253542 DEBUG nova.compute.manager [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Refreshing instance network info cache due to event network-changed-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.163 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.165 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:14 np0005534516 podman[280296]: 2025-11-25 08:25:14.181385063 +0000 UTC m=+0.288936129 container init 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.186 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.186 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:14 np0005534516 podman[280296]: 2025-11-25 08:25:14.187692797 +0000 UTC m=+0.295243843 container start 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:25:14 np0005534516 podman[280296]: 2025-11-25 08:25:14.196191662 +0000 UTC m=+0.303742708 container attach 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.258 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.752 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059099.751519, d3764acc-bce0-452e-bba5-90d76d88df2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.753 253542 INFO nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:25:14 np0005534516 nova_compute[253538]: 2025-11-25 08:25:14.766 253542 DEBUG nova.compute.manager [None req-21d3da1e-b910-4b7f-8654-7e47d31bb5f0 - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 183 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 278 op/s
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]: {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    "0": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "devices": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "/dev/loop3"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            ],
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_name": "ceph_lv0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_size": "21470642176",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "name": "ceph_lv0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "tags": {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_name": "ceph",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.crush_device_class": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.encrypted": "0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_id": "0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.vdo": "0"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            },
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "vg_name": "ceph_vg0"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        }
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    ],
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    "1": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "devices": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "/dev/loop4"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            ],
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_name": "ceph_lv1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_size": "21470642176",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "name": "ceph_lv1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "tags": {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_name": "ceph",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.crush_device_class": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.encrypted": "0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_id": "1",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.vdo": "0"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            },
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "vg_name": "ceph_vg1"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        }
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    ],
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    "2": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "devices": [
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "/dev/loop5"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            ],
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_name": "ceph_lv2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_size": "21470642176",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "name": "ceph_lv2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "tags": {
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.cluster_name": "ceph",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.crush_device_class": "",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.encrypted": "0",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osd_id": "2",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:                "ceph.vdo": "0"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            },
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "type": "block",
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:            "vg_name": "ceph_vg2"
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:        }
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]:    ]
Nov 25 03:25:15 np0005534516 fervent_vaughan[280313]: }
Nov 25 03:25:15 np0005534516 systemd[1]: libpod-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope: Deactivated successfully.
Nov 25 03:25:15 np0005534516 podman[280296]: 2025-11-25 08:25:15.042646003 +0000 UTC m=+1.150197049 container died 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:25:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924-merged.mount: Deactivated successfully.
Nov 25 03:25:15 np0005534516 podman[280296]: 2025-11-25 08:25:15.11407265 +0000 UTC m=+1.221623696 container remove 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:25:15 np0005534516 systemd[1]: libpod-conmon-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope: Deactivated successfully.
Nov 25 03:25:15 np0005534516 podman[280477]: 2025-11-25 08:25:15.761258565 +0000 UTC m=+0.045531052 container create 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:25:15 np0005534516 systemd[1]: Started libpod-conmon-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope.
Nov 25 03:25:15 np0005534516 podman[280477]: 2025-11-25 08:25:15.737803895 +0000 UTC m=+0.022076402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:16 np0005534516 podman[280477]: 2025-11-25 08:25:16.043527137 +0000 UTC m=+0.327799644 container init 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:25:16 np0005534516 podman[280477]: 2025-11-25 08:25:16.049974627 +0000 UTC m=+0.334247114 container start 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:25:16 np0005534516 hopeful_bhabha[280493]: 167 167
Nov 25 03:25:16 np0005534516 systemd[1]: libpod-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope: Deactivated successfully.
Nov 25 03:25:16 np0005534516 conmon[280493]: conmon 0607b96bb67e0e6eb14c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope/container/memory.events
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.154 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.185 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.186 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance network_info: |[{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.186 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.187 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Refreshing network info cache for port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.189 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start _get_guest_xml network_info=[{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.194 253542 WARNING nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:16 np0005534516 podman[280477]: 2025-11-25 08:25:16.199196117 +0000 UTC m=+0.483468614 container attach 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:25:16 np0005534516 podman[280477]: 2025-11-25 08:25:16.199744502 +0000 UTC m=+0.484017009 container died 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.201 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.203 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.207 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.208 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.209 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.209 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.210 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.211 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.211 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.212 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.212 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.214 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.218 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6bf3baad1494d8441ef45d3c06a22c99bf61c080e0e1fa6acd331b87ce58e6bb-merged.mount: Deactivated successfully.
Nov 25 03:25:16 np0005534516 podman[280477]: 2025-11-25 08:25:16.512926631 +0000 UTC m=+0.797199148 container remove 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:25:16 np0005534516 systemd[1]: libpod-conmon-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope: Deactivated successfully.
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663447044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.682 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.704 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:16 np0005534516 nova_compute[253538]: 2025-11-25 08:25:16.708 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:16 np0005534516 podman[280536]: 2025-11-25 08:25:16.713786842 +0000 UTC m=+0.063731146 container create 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:25:16 np0005534516 podman[280536]: 2025-11-25 08:25:16.670793401 +0000 UTC m=+0.020737645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:25:16 np0005534516 systemd[1]: Started libpod-conmon-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope.
Nov 25 03:25:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 210 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 195 op/s
Nov 25 03:25:17 np0005534516 podman[280536]: 2025-11-25 08:25:17.05259959 +0000 UTC m=+0.402543924 container init 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:25:17 np0005534516 podman[280536]: 2025-11-25 08:25:17.068235062 +0000 UTC m=+0.418179286 container start 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:25:17 np0005534516 podman[280536]: 2025-11-25 08:25:17.072679676 +0000 UTC m=+0.422624010 container attach 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.084 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.123 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.125 253542 DEBUG nova.virt.libvirt.vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857
664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:11Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.126 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.127 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.128 253542 DEBUG nova.objects.instance [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.150 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <uuid>29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</uuid>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <name>instance-00000012</name>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-1138480486</nova:name>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:16</nova:creationTime>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <nova:port uuid="a0a9c956-aaa5-4981-a1d9-ae896cea7b7c">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="serial">29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="uuid">29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:11:6b:81"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <target dev="tapa0a9c956-aa"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/console.log" append="off"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:17 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:17 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:17 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:17 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.150 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Preparing to wait for external event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.152 253542 DEBUG nova.virt.libvirt.vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTe
stJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:11Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.152 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.153 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.153 253542 DEBUG os_vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.154 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.155 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.158 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0a9c956-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.159 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0a9c956-aa, col_values=(('external_ids', {'iface-id': 'a0a9c956-aaa5-4981-a1d9-ae896cea7b7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:6b:81', 'vm-uuid': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:17 np0005534516 NetworkManager[48915]: <info>  [1764059117.1619] manager: (tapa0a9c956-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.171 253542 INFO os_vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa')#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.214 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.214 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.215 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:11:6b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.215 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Using config drive#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.234 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.859 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating config drive at /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config#033[00m
Nov 25 03:25:17 np0005534516 nova_compute[253538]: 2025-11-25 08:25:17.865 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt5d0dfs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.009 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt5d0dfs" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]: {
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_id": 1,
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "type": "bluestore"
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    },
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_id": 2,
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "type": "bluestore"
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    },
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_id": 0,
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:        "type": "bluestore"
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]:    }
Nov 25 03:25:18 np0005534516 quirky_snyder[280591]: }
Nov 25 03:25:18 np0005534516 systemd[1]: libpod-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope: Deactivated successfully.
Nov 25 03:25:18 np0005534516 podman[280536]: 2025-11-25 08:25:18.05728527 +0000 UTC m=+1.407229514 container died 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.072 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.078 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb-merged.mount: Deactivated successfully.
Nov 25 03:25:18 np0005534516 podman[280536]: 2025-11-25 08:25:18.108542559 +0000 UTC m=+1.458486773 container remove 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:25:18 np0005534516 systemd[1]: libpod-conmon-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope: Deactivated successfully.
Nov 25 03:25:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:25:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:25:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e3796ebb-2e9c-4484-a918-282a19ba3d80 does not exist
Nov 25 03:25:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1471d9e8-7761-4cf0-aadf-66269d9f6469 does not exist
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.244 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.245 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deleting local config drive /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:18 np0005534516 kernel: tapa0a9c956-aa: entered promiscuous mode
Nov 25 03:25:18 np0005534516 NetworkManager[48915]: <info>  [1764059118.2949] manager: (tapa0a9c956-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00055|binding|INFO|Claiming lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for this chassis.
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00056|binding|INFO|a0a9c956-aaa5-4981-a1d9-ae896cea7b7c: Claiming fa:16:3e:11:6b:81 10.100.0.10
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.303 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:81 10.100.0.10'], port_security=['fa:16:3e:11:6b:81 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.305 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.308 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00057|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c ovn-installed in OVS
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00058|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c up in Southbound
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:18 np0005534516 systemd-machined[215790]: New machine qemu-20-instance-00000012.
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.330 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53f7611e-c5a1-41e0-bb35-25e8ce1a3c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Nov 25 03:25:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:18 np0005534516 systemd-udevd[280762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.360 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[175ffcde-22ee-4941-af75-21bdc4436d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.362 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.363 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.363 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef40d41c-c155-4528-84e0-0c4400830cd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 NetworkManager[48915]: <info>  [1764059118.3706] device (tapa0a9c956-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:18 np0005534516 NetworkManager[48915]: <info>  [1764059118.3719] device (tapa0a9c956-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.392 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.395 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a53c6e-408a-4df7-ad7e-7af93739de2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3529a-7dc6-4bbc-a39d-8f1223d3472e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280772, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8dc4080-5f35-48a7-8811-4e2aa846ac59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280774, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280774, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.433 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.437 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.469 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.469 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.497 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.498 253542 INFO nova.compute.claims [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 03:25:18 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.687 253542 DEBUG nova.compute.manager [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG nova.compute.manager [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Processing event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.693 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.898 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.897475, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.898 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.901 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.910 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.918 253542 INFO nova.virt.libvirt.driver [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance spawned successfully.#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.918 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.929 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.934 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.8977208, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.964 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.964 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.965 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.965 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.966 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 nova_compute[253538]: 2025-11-25 08:25:18.966 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 219 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.0 MiB/s wr, 136 op/s
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.016 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.019 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.9048998, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.019 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.042 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.044 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.054 253542 INFO nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 7.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.055 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.079 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.113 253542 INFO nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 8.36 seconds to build instance.#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.127 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713389299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.148 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.153 253542 DEBUG nova.compute.provider_tree [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.166 253542 DEBUG nova.scheduler.client.report [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.190 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.191 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:25:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.230 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.231 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.244 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.266 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.283 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updated VIF entry in instance network info cache for port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.283 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.302 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.403 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.405 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.405 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating image(s)#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.436 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.477 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.504 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.509 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.558 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.588 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.589 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.591 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.591 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.622 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.630 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.749 253542 DEBUG nova.policy [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd61511e82c674abeb4ba87a4e5c5bf9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.902 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:19 np0005534516 nova_compute[253538]: 2025-11-25 08:25:19.957 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] resizing rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.046 253542 DEBUG nova.objects.instance [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'migration_context' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.057 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Ensure instance console log exists: /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.059 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:20.255 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:20.256 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:25:20 np0005534516 nova_compute[253538]: 2025-11-25 08:25:20.570 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Successfully created port: fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:25:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 236 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 772 KiB/s rd, 5.9 MiB/s wr, 157 op/s
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.141 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059106.1397002, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.141 253542 INFO nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.163 253542 DEBUG nova.compute.manager [None req-b34467f4-01ab-4090-876e-68531eb66e24 - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:21.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 DEBUG nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:21 np0005534516 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 WARNING nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received unexpected event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.057 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Successfully updated port: fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.194 253542 DEBUG nova.compute.manager [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.195 253542 DEBUG nova.compute.manager [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.195 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.281 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.757 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.757 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.770 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.936 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.936 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.942 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:25:22 np0005534516 nova_compute[253538]: 2025-11-25 08:25:22.943 253542 INFO nova.compute.claims [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:25:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 255 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.3 MiB/s wr, 184 op/s
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.149 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684200752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.575 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.581 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.590 253542 DEBUG nova.compute.provider_tree [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.596 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.596 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance network_info: |[{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.597 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.598 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.602 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start _get_guest_xml network_info=[{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.606 253542 DEBUG nova.scheduler.client.report [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.620 253542 WARNING nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.629 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.643 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.648 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.649 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.652 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.652 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.653 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.653 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.656 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.656 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.660 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.695 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.695 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.709 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.726 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.802 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.803 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.804 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating image(s)#033[00m
Nov 25 03:25:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.851 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.922 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.946 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.950 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:23 np0005534516 nova_compute[253538]: 2025-11-25 08:25:23.983 253542 DEBUG nova.policy [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.009 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.010 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.011 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.011 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.033 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.036 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355161595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.087 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.109 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.112 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115453552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.516 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.520 253542 DEBUG nova.virt.libvirt.vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_us
er_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.520 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.522 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.524 253542 DEBUG nova.objects.instance [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.547 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <uuid>1664fad5-765c-4ecc-93e2-6f96c7fb6d44</uuid>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <name>instance-00000013</name>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1794284611</nova:name>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:23</nova:creationTime>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:user uuid="d61511e82c674abeb4ba87a4e5c5bf9d">tempest-FloatingIPsAssociationTestJSON-1833680054-project-member</nova:user>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:project uuid="2d3671fc1a3f4b319a62f23168a9df72">tempest-FloatingIPsAssociationTestJSON-1833680054</nova:project>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <nova:port uuid="fdb3703c-f8da-4c10-9784-ed63bfe93fe1">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="serial">1664fad5-765c-4ecc-93e2-6f96c7fb6d44</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="uuid">1664fad5-765c-4ecc-93e2-6f96c7fb6d44</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f0:8b:90"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <target dev="tapfdb3703c-f8"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/console.log" append="off"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:24 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:24 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:24 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:24 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.548 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Preparing to wait for external event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.549 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.549 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.550 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.551 253542 DEBUG nova.virt.libvirt.vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054
',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.552 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.553 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.553 253542 DEBUG os_vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.557 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.563 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.563 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb3703c-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.564 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdb3703c-f8, col_values=(('external_ids', {'iface-id': 'fdb3703c-f8da-4c10-9784-ed63bfe93fe1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:8b:90', 'vm-uuid': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:24 np0005534516 NetworkManager[48915]: <info>  [1764059124.5670] manager: (tapfdb3703c-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.569 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.573 253542 INFO os_vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8')#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.637 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.637 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.638 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No VIF found with MAC fa:16:3e:f0:8b:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.639 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Using config drive#033[00m
Nov 25 03:25:24 np0005534516 nova_compute[253538]: 2025-11-25 08:25:24.674 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 255 op/s
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.085 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.153 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.280 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Successfully created port: 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.287 253542 DEBUG nova.objects.instance [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.304 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.304 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Ensure instance console log exists: /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.331 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating config drive at /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.336 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl750rea7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.479 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl750rea7" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.504 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.507 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.527 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.528 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.545 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.910 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.911 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deleting local config drive /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:25 np0005534516 kernel: tapfdb3703c-f8: entered promiscuous mode
Nov 25 03:25:25 np0005534516 NetworkManager[48915]: <info>  [1764059125.9610] manager: (tapfdb3703c-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 03:25:25 np0005534516 nova_compute[253538]: 2025-11-25 08:25:25.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:25Z|00059|binding|INFO|Claiming lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for this chassis.
Nov 25 03:25:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:25Z|00060|binding|INFO|fdb3703c-f8da-4c10-9784-ed63bfe93fe1: Claiming fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 03:25:25 np0005534516 systemd-udevd[281326]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:26 np0005534516 NetworkManager[48915]: <info>  [1764059126.0137] device (tapfdb3703c-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:26 np0005534516 NetworkManager[48915]: <info>  [1764059126.0148] device (tapfdb3703c-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.017 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8b:90 10.100.0.11'], port_security=['fa:16:3e:f0:8b:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fdb3703c-f8da-4c10-9784-ed63bfe93fe1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.018 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 bound to our chassis#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.020 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2394fe-55e5-4d96-a5a4-bd35f20c2dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.035 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf86a7b06-d1 in ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.036 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf86a7b06-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.036 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[624240c0-b9e0-40c1-a1ef-4f6345609b51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9fd287-4d5f-4cd2-9d55-f69b7161b8ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 systemd-machined[215790]: New machine qemu-21-instance-00000013.
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.048 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7a39d-ef6d-451c-94b8-d264935045f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:26Z|00061|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 ovn-installed in OVS
Nov 25 03:25:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:26Z|00062|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 up in Southbound
Nov 25 03:25:26 np0005534516 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[645c382c-1e90-473b-9c6c-c21f9ab4307d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.096 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b34fd4-cf9a-4701-a1ae-533e7950bf07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.101 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5823fa-8c1e-43c5-b3e4-12d82e646d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 NetworkManager[48915]: <info>  [1764059126.1021] manager: (tapf86a7b06-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.136 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e092b3ef-4b92-429c-804b-d40a378c4cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.141 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c75a8dbf-500d-4c34-a05c-da0eaf98da46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 NetworkManager[48915]: <info>  [1764059126.1666] device (tapf86a7b06-d0): carrier: link connected
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.174 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd296ed-c498-48ce-bed8-fda0e4b3bdfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.190 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b56b75c9-1c97-4a71-a3b8-4d6c342d6632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281361, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.203 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8fba91-6709-4840-84db-c572ce044b9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:7b45'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448076, 'tstamp': 448076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281363, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.216 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3263cff7-21be-4ce5-a65a-dedf6a37ada3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281364, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6120951e-49e6-47cd-8f4f-093aec4fa8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b442cb94-be02-48f4-a0b6-cbb2f27943b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.318 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.318 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.319 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:26 np0005534516 NetworkManager[48915]: <info>  [1764059126.3214] manager: (tapf86a7b06-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:26 np0005534516 kernel: tapf86a7b06-d0: entered promiscuous mode
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.323 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:26Z|00063|binding|INFO|Releasing lport 156cbe7f-b9cd-46c4-9552-1484f581cde6 from this chassis (sb_readonly=0)
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.342 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9e6d35-a3fc-430f-9283-2499eebae732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.344 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:25:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.346 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'env', 'PROCESS_TAG=haproxy-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.524 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059126.5242057, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.552 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059126.5244079, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.552 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.568 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:26 np0005534516 nova_compute[253538]: 2025-11-25 08:25:26.589 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:26 np0005534516 podman[281438]: 2025-11-25 08:25:26.771502287 +0000 UTC m=+0.067979883 container create e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:26 np0005534516 systemd[1]: Started libpod-conmon-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope.
Nov 25 03:25:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:26 np0005534516 podman[281438]: 2025-11-25 08:25:26.742232756 +0000 UTC m=+0.038710362 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:25:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd67979dba2879b6102202e2641e51318f111cb23f4aeb0a647e6c7456f9106/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:26 np0005534516 podman[281438]: 2025-11-25 08:25:26.842950714 +0000 UTC m=+0.139428310 container init e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 03:25:26 np0005534516 podman[281450]: 2025-11-25 08:25:26.844473987 +0000 UTC m=+0.090976170 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:26 np0005534516 podman[281438]: 2025-11-25 08:25:26.849952988 +0000 UTC m=+0.146430584 container start e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:25:26 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : New worker (281476) forked
Nov 25 03:25:26 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : Loading success.
Nov 25 03:25:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 304 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.527 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Successfully updated port: 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.747 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.761 253542 DEBUG nova.compute.manager [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-changed-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.762 253542 DEBUG nova.compute.manager [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Refreshing instance network info cache due to event network-changed-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:27 np0005534516 nova_compute[253538]: 2025-11-25 08:25:27.763 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 324 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Nov 25 03:25:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:25:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:25:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:25:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:25:29 np0005534516 nova_compute[253538]: 2025-11-25 08:25:29.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:29 np0005534516 nova_compute[253538]: 2025-11-25 08:25:29.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:29 np0005534516 podman[281485]: 2025-11-25 08:25:29.808064721 +0000 UTC m=+0.057965296 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.021 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.047 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.048 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance network_info: |[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.050 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.051 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Refreshing network info cache for port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.056 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start _get_guest_xml network_info=[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.064 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.065 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.066 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.066 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.067 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Processing event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.067 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.068 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.068 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.069 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.069 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.070 253542 WARNING nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received unexpected event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.074 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.079 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059130.0794368, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.080 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.082 253542 WARNING nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.083 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.087 253542 INFO nova.virt.libvirt.driver [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance spawned successfully.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.088 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.090 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.091 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.095 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.095 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
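The two probes above (v1 missing, v2 found) decide whether nova can enforce CPU shares on this host. A hypothetical re-implementation of the v2 half of that check: on a unified-hierarchy host the enabled controllers are listed in `cgroup.controllers` at the cgroup mount root. The `root` parameter exists only to make this sketch testable; the real nova code in `host.py` differs.

```python
from pathlib import Path

def has_cgroupsv2_cpu_controller(root: Path = Path("/sys/fs/cgroup")) -> bool:
    """Report whether the cgroups v2 'cpu' controller is enabled.

    Illustrative sketch of the probe the log records, not nova's code.
    """
    controllers = root / "cgroup.controllers"
    if not controllers.is_file():
        return False  # host is not mounted as a v2 unified hierarchy
    # cgroup.controllers is a single space-separated line, e.g.
    # "cpuset cpu io memory pids"; membership must be token-exact so
    # that "cpuset" does not count as "cpu".
    return "cpu" in controllers.read_text().split()
```
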
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.096 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.097 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
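The `hardware.py` trace above (build topologies for 1 vCPU within sockets/cores/threads limits of 65536, get one possibility, sort it) can be sketched as a brute-force enumeration of factorizations. This is an illustrative simplification: real nova also orders candidates by flavor and image preferences, which this ignores.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Topology:
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus: int,
                        max_sockets: int = 65536,
                        max_cores: int = 65536,
                        max_threads: int = 65536) -> list:
    """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    found.append(Topology(s, c, t))
    return found
```

For the 1-vCPU m1.nano flavor in this trace the only factorization is 1:1:1, matching the "Got 1 possible topologies" line; a 4-vCPU flavor would yield six candidates for the preference sort to order.
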
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.103 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.132 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.138 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.145 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.146 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.146 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.147 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.147 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.148 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.167 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.199 253542 INFO nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 10.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.200 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.268 253542 INFO nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 11.82 seconds to build instance.#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.291 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404325393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.605 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
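The `ceph mon dump --format=json` subprocess above is how nova's rbd_utils discovers monitor addresses for the `<host>` elements of RBD disks. A hedged sketch of consuming that output, assuming the usual mon-dump JSON shape (a top-level `"mons"` list whose entries carry a `"public_addr"` like `192.168.122.100:6789/0`; exact field names vary across Ceph releases), with the parsing split out so it is testable without a cluster:

```python
import json
import subprocess

def parse_mon_addrs(dump: dict) -> list:
    """Extract host:port monitor endpoints from a parsed mon dump."""
    # Ceph appends a "/<nonce>" suffix to addresses; strip it.
    return [m["public_addr"].split("/")[0] for m in dump.get("mons", [])]

def monitor_addrs(conf: str = "/etc/ceph/ceph.conf",
                  client_id: str = "openstack") -> list:
    """Run the same command the log shows and parse its JSON output."""
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", client_id, "--conf", conf])
    return parse_mon_addrs(json.loads(out))
```
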
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.628 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:30 np0005534516 nova_compute[253538]: 2025-11-25 08:25:30.632 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 339 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 25 03:25:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56515670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.149 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.152 253542 DEBUG nova.virt.libvirt.vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664
825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:23Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.153 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.154 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
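The two `os_vif_util` lines above show nova translating its internal VIF dict into an os-vif `VIFOpenVSwitch` object. As a schematic of that conversion (a hypothetical field subset, not the real os_vif object model), using values taken from the log:

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    """Map the nova VIF dict fields onto the converted-object fields
    seen in the log: details.port_filter -> has_traffic_filtering,
    devname -> vif_name, details.bridge_name falling back to the
    network's bridge."""
    details = vif.get("details", {})
    return VIFOpenVSwitch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
        vif_name=vif["devname"],
        has_traffic_filtering=details.get("port_filter", False),
        active=vif.get("active", False),
    )
```
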
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.156 253542 DEBUG nova.objects.instance [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.172 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <uuid>740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</uuid>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <name>instance-00000014</name>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-485239503</nova:name>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:30</nova:creationTime>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <nova:port uuid="817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="serial">740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="uuid">740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:50:3c:73"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <target dev="tap817e9f9b-9d"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/console.log" append="off"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:31 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:31 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:31 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:31 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.178 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Preparing to wait for external event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.180 253542 DEBUG nova.virt.libvirt.vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJ
SON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:23Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.180 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.181 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.182 253542 DEBUG os_vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.183 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.183 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.187 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.188 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap817e9f9b-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.188 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap817e9f9b-9d, col_values=(('external_ids', {'iface-id': '817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:3c:73', 'vm-uuid': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:31 np0005534516 NetworkManager[48915]: <info>  [1764059131.1938] manager: (tap817e9f9b-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.200 253542 INFO os_vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d')#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.284 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.284 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.285 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:50:3c:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.285 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Using config drive#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.313 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.853 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating config drive at /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config#033[00m
Nov 25 03:25:31 np0005534516 nova_compute[253538]: 2025-11-25 08:25:31.858 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bynld62 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.288 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updated VIF entry in instance network info cache for port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.289 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.309 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.395 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bynld62" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.436 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.439 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.590 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.592 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deleting local config drive /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:32 np0005534516 kernel: tap817e9f9b-9d: entered promiscuous mode
Nov 25 03:25:32 np0005534516 NetworkManager[48915]: <info>  [1764059132.6533] manager: (tap817e9f9b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 03:25:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:32Z|00064|binding|INFO|Claiming lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for this chassis.
Nov 25 03:25:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:32Z|00065|binding|INFO|817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8: Claiming fa:16:3e:50:3c:73 10.100.0.4
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.662 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:73 10.100.0.4'], port_security=['fa:16:3e:50:3c:73 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.664 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.666 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:32Z|00066|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 ovn-installed in OVS
Nov 25 03:25:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:32Z|00067|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 up in Southbound
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3da90c-6264-44bb-8c97-df611f05bdcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 systemd-machined[215790]: New machine qemu-22-instance-00000014.
Nov 25 03:25:32 np0005534516 systemd-udevd[281643]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:32 np0005534516 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Nov 25 03:25:32 np0005534516 NetworkManager[48915]: <info>  [1764059132.7212] device (tap817e9f9b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:32 np0005534516 NetworkManager[48915]: <info>  [1764059132.7249] device (tap817e9f9b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.725 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0566b0e2-6cf8-482c-ad45-b5117582700a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.729 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[91f15788-4a4e-473c-a720-16181e1b74ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.760 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[324dc9f9-bca4-4cd8-ab28-8b6d07614d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.779 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9774ab-247a-4aab-8bdb-a4d3f145060e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281652, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.807 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d473255-3e8b-4bb3-84eb-599b0be95b81]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281655, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281655, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.808 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:32 np0005534516 nova_compute[253538]: 2025-11-25 08:25:32.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.813 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 350 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 177 op/s
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059133.0628486, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.063 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.088 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059133.065332, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.088 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.107 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:33 np0005534516 nova_compute[253538]: 2025-11-25 08:25:33.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:33Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:6b:81 10.100.0.10
Nov 25 03:25:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:33Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:6b:81 10.100.0.10
Nov 25 03:25:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:34 np0005534516 nova_compute[253538]: 2025-11-25 08:25:34.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 235 op/s
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.285 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.286 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.310 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.391 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.392 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.398 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.398 253542 INFO nova.compute.claims [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:25:35 np0005534516 nova_compute[253538]: 2025-11-25 08:25:35.612 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124271192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.045 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.052 253542 DEBUG nova.compute.provider_tree [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.123 253542 DEBUG nova.scheduler.client.report [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.145 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.146 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.202 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.203 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.236 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.254 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.271 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.272 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.309 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.373 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.375 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.375 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating image(s)#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.397 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.419 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.441 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.444 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.463 253542 DEBUG nova.policy [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd61511e82c674abeb4ba87a4e5c5bf9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.480 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.482 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.490 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.490 253542 INFO nova.compute.claims [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.500 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.520 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.524 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 39580ba3-504b-4e17-b64f-f44ef66091da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.588 253542 DEBUG nova.compute.manager [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG nova.compute.manager [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Processing event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.590 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.618 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059136.6022937, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.618 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.620 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.624 253542 INFO nova.virt.libvirt.driver [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance spawned successfully.#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.625 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.639 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.653 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.663 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.663 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.664 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.665 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.665 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.666 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.674 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.725 253542 INFO nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 12.92 seconds to spawn the instance on the hypervisor.
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.725 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.757 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.792 253542 INFO nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 13.88 seconds to build instance.
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.808 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.817 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 39580ba3-504b-4e17-b64f-f44ef66091da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:36 np0005534516 podman[281815]: 2025-11-25 08:25:36.876554161 +0000 UTC m=+0.104372689 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:36 np0005534516 nova_compute[253538]: 2025-11-25 08:25:36.919 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] resizing rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:25:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 390 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.024 253542 DEBUG nova.objects.instance [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'migration_context' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.035 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.036 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Ensure instance console log exists: /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.063 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Successfully created port: f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:25:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:25:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4014580120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.205 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.212 253542 DEBUG nova.compute.provider_tree [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.232 253542 DEBUG nova.scheduler.client.report [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.253 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.254 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.306 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.306 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.325 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.342 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.421 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.423 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.423 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating image(s)
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.449 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.479 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.506 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.510 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.589 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.590 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.591 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.591 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.628 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.631 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:37 np0005534516 nova_compute[253538]: 2025-11-25 08:25:37.715 253542 DEBUG nova.policy [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c53798457642457e8c93278c6bbae0b7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.126 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.182 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] resizing rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.406 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Successfully updated port: f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.423 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.423 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.424 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.577 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Successfully created port: 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.630 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.696 253542 DEBUG nova.objects.instance [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'migration_context' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Ensure instance console log exists: /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.709 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.709 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG nova.compute.manager [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG nova.compute.manager [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:25:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.951 253542 DEBUG nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.951 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:25:38 np0005534516 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 WARNING nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received unexpected event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with vm_state active and task_state None.
Nov 25 03:25:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 412 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 228 op/s
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG nova.network.neutron [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.798 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.812 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.812 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance network_info: |[{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.813 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.813 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.817 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start _get_guest_xml network_info=[{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.821 253542 WARNING nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.829 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.829 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:39 np0005534516 nova_compute[253538]: 2025-11-25 08:25:39.843 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.078 253542 DEBUG nova.network.neutron [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.094 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.094 253542 DEBUG nova.compute.manager [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.095 253542 DEBUG nova.compute.manager [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] network_info to inject: |[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 25 03:25:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248823452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.319 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.340 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.344 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.369 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Successfully updated port: 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.562 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:25:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657543922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.750 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.751 253542 DEBUG nova.virt.libvirt.vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_us
er_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.751 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.752 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.753 253542 DEBUG nova.objects.instance [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.767 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <uuid>39580ba3-504b-4e17-b64f-f44ef66091da</uuid>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <name>instance-00000015</name>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1841402445</nova:name>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:39</nova:creationTime>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:user uuid="d61511e82c674abeb4ba87a4e5c5bf9d">tempest-FloatingIPsAssociationTestJSON-1833680054-project-member</nova:user>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:project uuid="2d3671fc1a3f4b319a62f23168a9df72">tempest-FloatingIPsAssociationTestJSON-1833680054</nova:project>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <nova:port uuid="f718b0c0-ca1b-4f5d-aa70-3d1f48097b97">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="serial">39580ba3-504b-4e17-b64f-f44ef66091da</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="uuid">39580ba3-504b-4e17-b64f-f44ef66091da</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/39580ba3-504b-4e17-b64f-f44ef66091da_disk">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/39580ba3-504b-4e17-b64f-f44ef66091da_disk.config">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:60:b8:f5"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <target dev="tapf718b0c0-ca"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/console.log" append="off"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:40 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:40 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:40 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:40 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
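[annotation] The domain XML dumped above by `_get_guest_xml` describes a Ceph RBD-backed root disk rather than a local file. As an illustrative sketch (not Nova code), the logged `<disk>` element can be parsed with the Python stdlib to recover the pool, image name, and monitor endpoint that libvirt/QEMU will use:

```python
import xml.etree.ElementTree as ET

# <disk> element exactly as it appears in the logged domain XML above.
disk_xml = """
<disk type="network" device="disk">
  <driver type="raw" cache="none"/>
  <source protocol="rbd" name="vms/39580ba3-504b-4e17-b64f-f44ef66091da_disk">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <auth username="openstack">
    <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
  </auth>
  <target dev="vda" bus="virtio"/>
</disk>
"""

disk = ET.fromstring(disk_xml)
source = disk.find("source")
# RBD source names are "<pool>/<image>"; split once to separate them.
pool, image = source.get("name").split("/", 1)
mon = disk.find("source/host")  # Ceph monitor address the client connects to
print(pool, image, mon.get("name"), mon.get("port"))
```

The same pattern applies to the `.config` cdrom disk below it, which points at the config-drive image in the same pool.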
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Preparing to wait for external event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.769 253542 DEBUG nova.virt.libvirt.vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.769 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.770 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
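[annotation] The converted VIF carries `vif_name='tapf718b0c0-ca'`, which matches the earlier `<target dev="tapf718b0c0-ca"/>` in the domain XML. A minimal sketch of the naming convention, assuming Nova's usual "tap" prefix plus truncation to 14 characters (so the name fits the kernel's interface-name limit) rather than code taken from Nova itself:

```python
# Assumed convention: "tap" + Neutron port UUID, truncated to 14 chars
# (consistent with the tap device name seen in this log).
NIC_NAME_LEN = 14

def tap_name(port_uuid: str) -> str:
    """Derive the tap device name from a Neutron port UUID."""
    return ("tap" + port_uuid)[:NIC_NAME_LEN]

print(tap_name("f718b0c0-ca1b-4f5d-aa70-3d1f48097b97"))  # -> tapf718b0c0-ca
```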
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.770 253542 DEBUG os_vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf718b0c0-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf718b0c0-ca, col_values=(('external_ids', {'iface-id': 'f718b0c0-ca1b-4f5d-aa70-3d1f48097b97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:b8:f5', 'vm-uuid': '39580ba3-504b-4e17-b64f-f44ef66091da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:40 np0005534516 NetworkManager[48915]: <info>  [1764059140.7770] manager: (tapf718b0c0-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.782 253542 INFO os_vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca')#033[00m
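[annotation] The two ovsdbapp transactions above (`AddPortCommand` then `DbSetCommand` on the `Interface` row) together attach the tap device to `br-int` and tag it with the `external_ids` that OVN uses to bind the logical port. Nova/os-vif speak to OVSDB directly, but a hypothetical `ovs-vsctl` one-liner with the same effect can be assembled from the logged values:

```python
# Illustrative only: the CLI equivalent of the logged OVSDB transaction.
# os-vif uses the ovsdbapp native interface, not ovs-vsctl.
port = "tapf718b0c0-ca"
external_ids = {
    "iface-id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97",  # Neutron port UUID
    "iface-status": "active",
    "attached-mac": "fa:16:3e:60:b8:f5",
    "vm-uuid": "39580ba3-504b-4e17-b64f-f44ef66091da",
}
cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
       "--", "set", "Interface", port]
cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
print(" ".join(cmd))
```

OVN's controller watches for an `Interface` whose `iface-id` matches a logical switch port; once it binds, Neutron emits the `network-vif-plugged` event that Nova registered to wait for at 08:25:40.768.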
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No VIF found with MAC fa:16:3e:60:b8:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.833 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Using config drive#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.853 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG nova.compute.manager [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG nova.compute.manager [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:40 np0005534516 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 450 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.8 MiB/s wr, 247 op/s
Nov 25 03:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.051 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.051 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.052 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.362 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating config drive at /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.369 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrowkyrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.499 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrowkyrm" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
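[annotation] The config drive built by the `mkisofs` run above is an ISO9660 image whose volume label, `config-2`, is what guest tools such as cloud-init scan block devices for. A small sketch that picks the label out of the logged command line (the multi-word `-publisher` value is quoted here so `shlex` treats it as one argument; the log prints the argv joined with plain spaces):

```python
import shlex

# mkisofs invocation as logged above, with the publisher value quoted.
logged = ("/usr/bin/mkisofs -o /var/lib/nova/instances/"
          "39580ba3-504b-4e17-b64f-f44ef66091da/disk.config "
          "-ldots -allow-lowercase -allow-multidot -l "
          "-publisher 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9' "
          "-quiet -J -r -V config-2 /tmp/tmpnrowkyrm")
argv = shlex.split(logged)
label = argv[argv.index("-V") + 1]  # ISO9660 volume label
print(label)  # -> config-2
```

The resulting `disk.config` is then pushed into the `vms` RBD pool by the `rbd import` command a few lines below, matching the `.config` cdrom source in the domain XML.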
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.527 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.531 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.705 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.706 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
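[annotation] The `instance_info_cache` entry above is plain JSON, so the interesting fields (fixed IP 10.100.0.3, MTU 1442, which is consistent with 1500 minus Geneve encapsulation overhead on this tunneled OVN network) are easy to pull out. The sketch below parses a trimmed copy of the logged entry; fields not needed for the lookup are omitted:

```python
import json

# Trimmed copy of the network_info entry logged above (other fields omitted).
vif_json = '''{
  "id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97",
  "address": "fa:16:3e:60:b8:f5",
  "network": {
    "subnets": [{
      "cidr": "10.100.0.0/28",
      "gateway": {"address": "10.100.0.1", "version": 4},
      "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4}]
    }],
    "meta": {"mtu": 1442, "tunneled": true}
  },
  "devname": "tapf718b0c0-ca",
  "active": false
}'''
vif = json.loads(vif_json)
fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
mtu = vif["network"]["meta"]["mtu"]
# "active" stays false until the network-vif-plugged event arrives.
print(fixed_ip, mtu, vif["active"])
```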
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.709 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.733 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.921 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.921 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance network_info: |[{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.922 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.922 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.924 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start _get_guest_xml network_info=[{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.929 253542 WARNING nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.935 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.936 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.943 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.943 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.944 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.944 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.947 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.947 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:41 np0005534516 nova_compute[253538]: 2025-11-25 08:25:41.951 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.368 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.369 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deleting local config drive /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770839393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:42 np0005534516 kernel: tapf718b0c0-ca: entered promiscuous mode
Nov 25 03:25:42 np0005534516 NetworkManager[48915]: <info>  [1764059142.4154] manager: (tapf718b0c0-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:42Z|00068|binding|INFO|Claiming lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for this chassis.
Nov 25 03:25:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:42Z|00069|binding|INFO|f718b0c0-ca1b-4f5d-aa70-3d1f48097b97: Claiming fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.423 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.429 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b8:f5 10.100.0.3'], port_security=['fa:16:3e:60:b8:f5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '39580ba3-504b-4e17-b64f-f44ef66091da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.430 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 bound to our chassis#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.433 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4#033[00m
Nov 25 03:25:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:42Z|00070|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 ovn-installed in OVS
Nov 25 03:25:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:42Z|00071|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 up in Southbound
Nov 25 03:25:42 np0005534516 systemd-udevd[282268]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4c50ac45-ec8c-4046-bf57-72abce74a397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 NetworkManager[48915]: <info>  [1764059142.4649] device (tapf718b0c0-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:42 np0005534516 NetworkManager[48915]: <info>  [1764059142.4661] device (tapf718b0c0-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:42 np0005534516 systemd-machined[215790]: New machine qemu-23-instance-00000015.
Nov 25 03:25:42 np0005534516 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.478 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.485 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.500 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b26c8c82-0395-46bf-a8c9-8fd8381a6cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.507 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[41824de4-5e93-4889-8065-24dc5df6a641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b4c0c8-7da9-4e20-a6de-d37e3b96fbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f019c31-85ac-4e6f-bdf0-d0bee32c1d8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282290, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.567 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8379f66-ccdc-4a9a-b119-f9a5344d73e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448087, 'tstamp': 448087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282291, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448090, 'tstamp': 448090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282291, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.568 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.569 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.572 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.872 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.872 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.888 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.954 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.955 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.955 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.956 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.956 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Processing event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.957 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:42 np0005534516 nova_compute[253538]: 2025-11-25 08:25:42.960 253542 WARNING nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received unexpected event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:25:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 465 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 257 op/s
Nov 25 03:25:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191226321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.026 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.027 253542 DEBUG nova.virt.libvirt.vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegati
veTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:37Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.028 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.029 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.030 253542 DEBUG nova.objects.instance [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'pci_devices' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.050 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <uuid>ceb93a9d-5e18-4351-9cfa-3949c00b448a</uuid>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <name>instance-00000016</name>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307</nova:name>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:41</nova:creationTime>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:user uuid="c53798457642457e8c93278c6bbae0b7">tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member</nova:user>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:project uuid="c4ea3de796e6464fbf65835dc4c3ad79">tempest-FloatingIPsAssociationNegativeTestJSON-1564808270</nova:project>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <nova:port uuid="6283ff13-d854-41d6-8a7a-eab602cc4cf4">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="serial">ceb93a9d-5e18-4351-9cfa-3949c00b448a</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="uuid">ceb93a9d-5e18-4351-9cfa-3949c00b448a</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:57:ac:37"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <target dev="tap6283ff13-d8"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/console.log" append="off"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:43 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:43 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:43 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:43 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Preparing to wait for external event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.055 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.056 253542 DEBUG nova.virt.libvirt.vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssocia
tionNegativeTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:37Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.056 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.057 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.057 253542 DEBUG os_vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6283ff13-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6283ff13-d8, col_values=(('external_ids', {'iface-id': '6283ff13-d854-41d6-8a7a-eab602cc4cf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:ac:37', 'vm-uuid': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:43 np0005534516 NetworkManager[48915]: <info>  [1764059143.0652] manager: (tap6283ff13-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.069 253542 INFO os_vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8')
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.195 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.196 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.196 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No VIF found with MAC fa:16:3e:57:ac:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.197 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Using config drive
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.228 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.349 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.349 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.3487282, 39580ba3-504b-4e17-b64f-f44ef66091da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Started (Lifecycle Event)
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.353 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.358 253542 INFO nova.virt.libvirt.driver [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance spawned successfully.
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.358 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.370 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.384 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.385 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.385 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.386 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.387 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.387 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.395 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.3504753, 39580ba3-504b-4e17-b64f-f44ef66091da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.395 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Paused (Lifecycle Event)
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.416 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.352884, 39580ba3-504b-4e17-b64f-f44ef66091da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.416 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Resumed (Lifecycle Event)
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.440 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.444 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.458 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.503 253542 INFO nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 7.13 seconds to spawn the instance on the hypervisor.
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.504 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.590 253542 INFO nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 8.22 seconds to build instance.
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.616 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:25:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:43Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 03:25:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:43Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.840 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating config drive at /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.846 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9yr98jz_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:43 np0005534516 nova_compute[253538]: 2025-11-25 08:25:43.977 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9yr98jz_" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.016 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.021 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.086 253542 INFO nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Rebuilding instance
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.311 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.324 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.391 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.393 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deleting local config drive /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config because it was imported into RBD.
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.412 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_requests' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.421 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.432 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.443 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:25:44 np0005534516 kernel: tap6283ff13-d8: entered promiscuous mode
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.4492] manager: (tap6283ff13-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.451 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 03:25:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:44Z|00072|binding|INFO|Claiming lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 for this chassis.
Nov 25 03:25:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:44Z|00073|binding|INFO|6283ff13-d854-41d6-8a7a-eab602cc4cf4: Claiming fa:16:3e:57:ac:37 10.100.0.3
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.4685] device (tap6283ff13-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.4694] device (tap6283ff13-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.476 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 03:25:44 np0005534516 systemd-machined[215790]: New machine qemu-24-instance-00000016.
Nov 25 03:25:44 np0005534516 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.502 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:37 10.100.0.3'], port_security=['fa:16:3e:57:ac:37 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c614f14d-ba5c-4351-9110-1ad24f7c46f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d60c6aa2-d509-48b2-b548-58ea5b315827, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6283ff13-d854-41d6-8a7a-eab602cc4cf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.503 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 in datapath f0cb07bc-dc94-4b65-bb7f-100ce36c9428 bound to our chassis
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.505 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f18d6e-dcf3-4fb9-9617-b959bd44dab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.517 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0cb07bc-d1 in ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.519 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0cb07bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35d0604f-2ad2-4dcf-b2b9-e431888ad3b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[205a2420-6962-4215-ad1e-98b6cdab0254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.531 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2367a0d7-ec1d-4dbe-bd0a-2133fd39f0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:44Z|00074|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 ovn-installed in OVS
Nov 25 03:25:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:44Z|00075|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 up in Southbound
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d617b70-9cb4-43c8-9cb7-f73b2e282c7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.586 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[03222ada-b6a7-4a06-8a78-ae2beb4eec6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.5967] manager: (tapf0cb07bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3be0acf6-907b-42c5-a66b-0b5ee855aaa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[329a8915-bbab-4d34-aa4f-5e95b97c54cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe355d-163e-4618-b651-02d017832793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.6615] device (tapf0cb07bc-d0): carrier: link connected
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.667 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2131a6-261a-4b6e-8ae4-5b90d841cd22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[104c0139-177e-4f58-a9c2-595e455f2850]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0cb07bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d7:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449926, 'reachable_time': 33256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282461, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9761f11-6053-4ed7-9a4e-1291f84a8213]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:d736'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449926, 'tstamp': 449926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282462, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea58f54-e059-46be-81cb-8979f36f4b8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0cb07bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d7:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449926, 'reachable_time': 33256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282463, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fc775c-5e4c-4050-8dd0-7c9fc3cebf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2cd6f9-34f5-45a3-98d4-40bf3af09c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0cb07bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0cb07bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:44 np0005534516 NetworkManager[48915]: <info>  [1764059144.8288] manager: (tapf0cb07bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:44 np0005534516 kernel: tapf0cb07bc-d0: entered promiscuous mode
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.833 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0cb07bc-d0, col_values=(('external_ids', {'iface-id': '6ff33e52-8ce7-4c58-8236-5e210dda120f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:44Z|00076|binding|INFO|Releasing lport 6ff33e52-8ce7-4c58-8236-5e210dda120f from this chassis (sb_readonly=0)
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:44 np0005534516 nova_compute[253538]: 2025-11-25 08:25:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.856 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de5a4dc3-94e2-44a9-a543-bf5bd4e8aa1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.859 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:25:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.860 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'env', 'PROCESS_TAG=haproxy-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:25:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 487 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.6 MiB/s wr, 291 op/s
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.148 253542 DEBUG nova.compute.manager [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.150 253542 DEBUG nova.compute.manager [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Processing event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.273 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.274 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.2730155, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.274 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.280 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.293 253542 INFO nova.virt.libvirt.driver [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance spawned successfully.#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.293 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.297 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.315 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.321 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.323 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.323 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.348 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.349 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.2742052, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.349 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.362 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.365 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.27616, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.386 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.401 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:25:45 np0005534516 podman[282534]: 2025-11-25 08:25:45.43808362 +0000 UTC m=+0.030175076 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.549 253542 INFO nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 8.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.550 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:45 np0005534516 podman[282534]: 2025-11-25 08:25:45.565676132 +0000 UTC m=+0.157767568 container create df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.627 253542 INFO nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 9.26 seconds to build instance.#033[00m
Nov 25 03:25:45 np0005534516 nova_compute[253538]: 2025-11-25 08:25:45.651 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:45 np0005534516 systemd[1]: Started libpod-conmon-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope.
Nov 25 03:25:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:25:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f235619556de56030111cecddf940547ff0043e49f6964151133b56b1d88ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:25:45 np0005534516 podman[282534]: 2025-11-25 08:25:45.851020371 +0000 UTC m=+0.443111857 container init df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:25:45 np0005534516 podman[282534]: 2025-11-25 08:25:45.859198948 +0000 UTC m=+0.451290404 container start df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:25:45 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : New worker (282556) forked
Nov 25 03:25:45 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : Loading success.
Nov 25 03:25:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 496 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Nov 25 03:25:47 np0005534516 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 03:25:47 np0005534516 NetworkManager[48915]: <info>  [1764059147.2092] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00077|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00078|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00079|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.237 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.238 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.243 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.259 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d055c-1421-4774-b95a-62900599513f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 03:25:47 np0005534516 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 15.785s CPU time.
Nov 25 03:25:47 np0005534516 systemd-machined[215790]: Machine qemu-18-instance-00000010 terminated.
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.291 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[675af06a-156c-4a4a-b32f-9160b8be0f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.295 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42dfdc23-e8bc-40f1-90ae-414906d44ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.324 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2afb00ba-a7d2-4e00-a050-a310a3a98603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.343 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[897f6575-16f0-4bb4-be20-773baaa4244c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282576, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 NetworkManager[48915]: <info>  [1764059147.3527] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 03:25:47 np0005534516 NetworkManager[48915]: <info>  [1764059147.3532] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53cbcafa-16ef-49f2-82fe-ea9b986bee27]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282578, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282578, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.359 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.466 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00080|binding|INFO|Releasing lport 6ff33e52-8ce7-4c58-8236-5e210dda120f from this chassis (sb_readonly=0)
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00081|binding|INFO|Releasing lport 156cbe7f-b9cd-46c4-9552-1484f581cde6 from this chassis (sb_readonly=0)
Nov 25 03:25:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:47Z|00082|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.501 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.505 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.509 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.511 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:43Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.511 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.512 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.513 253542 DEBUG os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.515 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:47 np0005534516 nova_compute[253538]: 2025-11-25 08:25:47.521 253542 INFO os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.160 253542 DEBUG nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.161 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.161 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.162 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.163 253542 DEBUG nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.163 253542 WARNING nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received unexpected event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:25:48 np0005534516 nova_compute[253538]: 2025-11-25 08:25:48.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 498 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.257 253542 DEBUG nova.compute.manager [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.257 253542 DEBUG nova.compute.manager [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.894 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del#033[00m
Nov 25 03:25:49 np0005534516 nova_compute[253538]: 2025-11-25 08:25:49.895 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete#033[00m
Nov 25 03:25:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:49Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:3c:73 10.100.0.4
Nov 25 03:25:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:49Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:3c:73 10.100.0.4
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.400 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.401 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.430 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.464 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.485 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.488 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.564 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.565 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.565 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.566 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.588 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.592 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.813 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 WARNING nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 WARNING nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.913 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.914 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:50 np0005534516 nova_compute[253538]: 2025-11-25 08:25:50.927 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 461 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 296 op/s
Nov 25 03:25:51 np0005534516 nova_compute[253538]: 2025-11-25 08:25:51.778 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:51 np0005534516 nova_compute[253538]: 2025-11-25 08:25:51.847 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.476 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.477 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.477 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.478 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.478 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.481 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.485 253542 WARNING nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.492 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.492 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.496 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.496 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.497 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.497 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.500 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.500 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.515 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604412980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.933 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.974 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:52 np0005534516 nova_compute[253538]: 2025-11-25 08:25:52.981 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 464 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.5 MiB/s wr, 282 op/s
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:25:53
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes']
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:25:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:25:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471726923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.416 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.418 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminT
estJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:50Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.418 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.419 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.422 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <name>instance-00000010</name>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:25:52</nova:creationTime>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <target dev="tap4ad9572b-6a"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:25:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:25:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:25:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:25:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:50Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.424 253542 DEBUG os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.425 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.425 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:25:53 np0005534516 NetworkManager[48915]: <info>  [1764059153.4344] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.436 253542 INFO os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.525 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.553 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.566 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.592 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'keypairs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:25:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:25:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.860 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.866 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqtpomag execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:53 np0005534516 nova_compute[253538]: 2025-11-25 08:25:53.994 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqtpomag" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.017 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.022 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.160 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.161 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.#033[00m
Nov 25 03:25:54 np0005534516 NetworkManager[48915]: <info>  [1764059154.2026] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 03:25:54 np0005534516 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 03:25:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:54Z|00083|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 03:25:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:54Z|00084|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.216 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.218 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.219 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.233 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c5d1a6-9411-4bac-9823-f273cd744ea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:54Z|00085|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 03:25:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:54Z|00086|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:54 np0005534516 systemd-machined[215790]: New machine qemu-25-instance-00000010.
Nov 25 03:25:54 np0005534516 systemd[1]: Started Virtual Machine qemu-25-instance-00000010.
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.272 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c228ebc8-f4d3-4a52-b58b-86d536720417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.277 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3162d5-adf6-4265-a308-6035c84b78b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 systemd-udevd[282913]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:25:54 np0005534516 NetworkManager[48915]: <info>  [1764059154.2994] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:25:54 np0005534516 NetworkManager[48915]: <info>  [1764059154.3002] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.314 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[93f74cf2-c2b6-4c3b-b2b9-c5177f342d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.330 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0bdc53-9a85-4995-adac-55d14de6e4ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282923, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45364dd0-142d-4f46-8c22-16a48ba0d5f2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282924, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282924, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.349 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.349 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.350 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:25:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.350 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.699 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.699 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.700 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.700 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059154.6986327, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.701 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.705 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.706 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.718 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.730 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.733 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.735 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.735 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.756 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.757 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059154.6998086, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.757 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.778 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.783 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.788 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.806 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.835 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.835 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.836 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:25:54 np0005534516 nova_compute[253538]: 2025-11-25 08:25:54.884 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:25:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 497 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.9 MiB/s wr, 327 op/s
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.958 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.958 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:55 np0005534516 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 501 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.298 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.299 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:57 np0005534516 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:57 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 03:25:57 np0005534516 podman[282968]: 2025-11-25 08:25:57.835973963 +0000 UTC m=+0.078841013 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 03:25:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:57Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 03:25:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:57Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 03:25:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:58Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:ac:37 10.100.0.3
Nov 25 03:25:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:25:58Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:ac:37 10.100.0.3
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.560 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.560 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.626 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.627 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.641 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:25:58 np0005534516 nova_compute[253538]: 2025-11-25 08:25:58.805 253542 INFO nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Rebuilding instance#033[00m
Nov 25 03:25:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:25:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 515 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 308 op/s
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.072 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.090 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.131 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_requests' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.139 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.148 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.156 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.165 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.168 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:25:59 np0005534516 nova_compute[253538]: 2025-11-25 08:25:59.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:00 np0005534516 podman[282985]: 2025-11-25 08:26:00.821657409 +0000 UTC m=+0.071724487 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 03:26:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 549 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.3 MiB/s wr, 328 op/s
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.592 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.594 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.611 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.612 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.613 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.614 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.614 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.615 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.616 253542 WARNING nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.617 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.618 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.619 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.619 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.620 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:01 np0005534516 nova_compute[253538]: 2025-11-25 08:26:01.621 253542 WARNING nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.892 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.893 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.908 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.962 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.963 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.973 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:26:02 np0005534516 nova_compute[253538]: 2025-11-25 08:26:02.973 253542 INFO nova.compute.claims [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 561 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 288 op/s
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.159 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.437 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669348661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.632 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.638 253542 DEBUG nova.compute.provider_tree [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.658 253542 DEBUG nova.scheduler.client.report [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.686 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.688 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00489235110617308 of space, bias 1.0, pg target 1.4677053318519242 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:26:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.752 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.752 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.792 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.811 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:26:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.918 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.920 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.920 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating image(s)#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.941 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.965 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.985 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:03 np0005534516 nova_compute[253538]: 2025-11-25 08:26:03.988 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.016 253542 DEBUG nova.policy [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '02e795c75a3b40bbbc3ca83d0501777f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52217f37b23343d697fa6d2be38e236d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.073 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.074 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.075 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.075 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.094 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.097 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ca088afd-31e5-497b-bfc5-ba1f56096642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.456 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ca088afd-31e5-497b-bfc5-ba1f56096642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.524 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] resizing rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.934 253542 DEBUG nova.objects.instance [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'migration_context' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.951 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.951 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Ensure instance console log exists: /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.952 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.952 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:04 np0005534516 nova_compute[253538]: 2025-11-25 08:26:04.953 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 576 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 264 op/s
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.413 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Successfully created port: 2089bf75-6119-4c42-a326-989b3931ec08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.546 253542 DEBUG nova.compute.manager [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.546 253542 DEBUG nova.compute.manager [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.547 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.547 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.548 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.814 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.815 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.816 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.817 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.817 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.819 253542 INFO nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Terminating instance#033[00m
Nov 25 03:26:05 np0005534516 nova_compute[253538]: 2025-11-25 08:26:05.820 253542 DEBUG nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:05 np0005534516 kernel: tapf718b0c0-ca (unregistering): left promiscuous mode
Nov 25 03:26:05 np0005534516 NetworkManager[48915]: <info>  [1764059165.9987] device (tapf718b0c0-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:06Z|00087|binding|INFO|Releasing lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 from this chassis (sb_readonly=0)
Nov 25 03:26:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:06Z|00088|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 down in Southbound
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:06Z|00089|binding|INFO|Removing iface tapf718b0c0-ca ovn-installed in OVS
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.021 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b8:f5 10.100.0.3'], port_security=['fa:16:3e:60:b8:f5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '39580ba3-504b-4e17-b64f-f44ef66091da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.022 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 unbound from our chassis#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.024 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4be3e7c3-7b74-43a3-b7fc-e0371d14dc16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 25 03:26:06 np0005534516 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 14.648s CPU time.
Nov 25 03:26:06 np0005534516 systemd-machined[215790]: Machine qemu-23-instance-00000015 terminated.
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.079 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[73110975-fcbd-4e26-84d5-4999b2997173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00c94474-e5a2-47ea-9f93-b55e275921c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.106 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[281df5e4-e555-47da-b413-4e2d59e3bbf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.122 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0255be6c-32c5-4aed-a5f0-f3f12a8dfa25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283205, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.137 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e0f2a-fe78-4d24-aeef-cef019a5061a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448087, 'tstamp': 448087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283206, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448090, 'tstamp': 448090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283206, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.138 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.259 253542 INFO nova.virt.libvirt.driver [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance destroyed successfully.#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.260 253542 DEBUG nova.objects.instance [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'resources' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.271 253542 DEBUG nova.virt.libvirt.vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:43Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.272 253542 DEBUG nova.network.os_vif_util [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.272 253542 DEBUG nova.network.os_vif_util [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.273 253542 DEBUG os_vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.275 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.275 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf718b0c0-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:06 np0005534516 nova_compute[253538]: 2025-11-25 08:26:06.283 253542 INFO os_vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca')#033[00m
Nov 25 03:26:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 591 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:26:07 np0005534516 podman[283237]: 2025-11-25 08:26:07.84066537 +0000 UTC m=+0.082506634 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.909 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Successfully updated port: 2089bf75-6119-4c42-a326-989b3931ec08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:07 np0005534516 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:26:08 np0005534516 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:08 np0005534516 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:08 np0005534516 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:26:08 np0005534516 nova_compute[253538]: 2025-11-25 08:26:08.209 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:26:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 586 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Nov 25 03:26:09 np0005534516 nova_compute[253538]: 2025-11-25 08:26:09.238 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:26:09 np0005534516 nova_compute[253538]: 2025-11-25 08:26:09.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:09 np0005534516 nova_compute[253538]: 2025-11-25 08:26:09.936 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:09 np0005534516 nova_compute[253538]: 2025-11-25 08:26:09.998 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:09 np0005534516 nova_compute[253538]: 2025-11-25 08:26:09.999 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance network_info: |[{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.003 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start _get_guest_xml network_info=[{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.009 253542 WARNING nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.017 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.018 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.028 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.029 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.030 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.031 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.031 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.032 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.033 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.033 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.034 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.034 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.035 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.035 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.036 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.037 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.041 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.253 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.254 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.273 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062862655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.508 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.538 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.542 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542444532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.973 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.975 253542 DEBUG nova.virt.libvirt.vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:03Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.976 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.978 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.980 253542 DEBUG nova.objects.instance [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:10 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.997 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:26:10 np0005534516 nova_compute[253538]:  <uuid>ca088afd-31e5-497b-bfc5-ba1f56096642</uuid>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:  <name>instance-00000017</name>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1453002528</nova:name>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:26:10</nova:creationTime>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:        <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 03:26:10 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        <nova:port uuid="2089bf75-6119-4c42-a326-989b3931ec08">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="serial">ca088afd-31e5-497b-bfc5-ba1f56096642</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="uuid">ca088afd-31e5-497b-bfc5-ba1f56096642</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ca088afd-31e5-497b-bfc5-ba1f56096642_disk">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:b9:c0:7d"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <target dev="tap2089bf75-61"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/console.log" append="off"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:26:11 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:26:11 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:26:11 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:26:11 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.998 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Preparing to wait for external event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.999 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:10.999 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.000 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.000 253542 DEBUG nova.virt.libvirt.vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:03Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.001 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.001 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.002 253542 DEBUG os_vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.006 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2089bf75-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2089bf75-61, col_values=(('external_ids', {'iface-id': '2089bf75-6119-4c42-a326-989b3931ec08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:c0:7d', 'vm-uuid': 'ca088afd-31e5-497b-bfc5-ba1f56096642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:11 np0005534516 NetworkManager[48915]: <info>  [1764059171.0095] manager: (tap2089bf75-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.016 253542 INFO os_vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61')#033[00m
Nov 25 03:26:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 555 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.336374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171336408, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1340, "num_deletes": 252, "total_data_size": 1834743, "memory_usage": 1870416, "flush_reason": "Manual Compaction"}
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.360 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.360 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.361 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No VIF found with MAC fa:16:3e:b9:c0:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.361 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Using config drive#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.384 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171700611, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1814575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24338, "largest_seqno": 25677, "table_properties": {"data_size": 1808387, "index_size": 3327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14087, "raw_average_key_size": 20, "raw_value_size": 1795583, "raw_average_value_size": 2587, "num_data_blocks": 148, "num_entries": 694, "num_filter_entries": 694, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059053, "oldest_key_time": 1764059053, "file_creation_time": 1764059171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 364281 microseconds, and 4960 cpu microseconds.
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.786 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.700651) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1814575 bytes OK
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.700674) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788471) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788527) EVENT_LOG_v1 {"time_micros": 1764059171788513, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1828675, prev total WAL file size 1828675, number of live WAL files 2.
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.790252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1772KB)], [56(7048KB)]
Nov 25 03:26:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171790365, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9031853, "oldest_snapshot_seqno": -1}
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.807 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.809 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.948 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating config drive at /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config#033[00m
Nov 25 03:26:11 np0005534516 nova_compute[253538]: 2025-11-25 08:26:11.957 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoeschgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.088 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoeschgk" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.354 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.359 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4772 keys, 7303244 bytes, temperature: kUnknown
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172373863, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7303244, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7271824, "index_size": 18382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119981, "raw_average_key_size": 25, "raw_value_size": 7186100, "raw_average_value_size": 1505, "num_data_blocks": 760, "num_entries": 4772, "num_filter_entries": 4772, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.374232) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7303244 bytes
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.392637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 15.5 rd, 12.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(9.0) write-amplify(4.0) OK, records in: 5292, records dropped: 520 output_compression: NoCompression
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.392669) EVENT_LOG_v1 {"time_micros": 1764059172392654, "job": 30, "event": "compaction_finished", "compaction_time_micros": 583346, "compaction_time_cpu_micros": 30630, "output_level": 6, "num_output_files": 1, "total_output_size": 7303244, "num_input_records": 5292, "num_output_records": 4772, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172393560, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172396620, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.790142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:26:12 np0005534516 nova_compute[253538]: 2025-11-25 08:26:12.581 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610384772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 541 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.044 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.141 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.148 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.148 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.154 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.155 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.160 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.161 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.165 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.166 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.175 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.175 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.180 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.181 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.513 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.514 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3327MB free_disk=59.70021438598633GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.515 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.515 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.602 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.602 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23ace5af-6840-42aa-a801-98abbb4f3a52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 39580ba3-504b-4e17-b64f-f44ef66091da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ceb93a9d-5e18-4351-9cfa-3949c00b448a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ca088afd-31e5-497b-bfc5-ba1f56096642 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.604 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.604 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.794 253542 DEBUG nova.compute.manager [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.795 253542 DEBUG nova.compute.manager [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.796 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.796 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.797 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.841 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.951 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.952 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deleting local config drive /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config because it was imported into RBD.#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.992 253542 INFO nova.virt.libvirt.driver [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deleting instance files /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da_del#033[00m
Nov 25 03:26:13 np0005534516 nova_compute[253538]: 2025-11-25 08:26:13.993 253542 INFO nova.virt.libvirt.driver [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deletion of /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da_del complete#033[00m
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.0308] manager: (tap2089bf75-61): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 03:26:14 np0005534516 kernel: tap2089bf75-61: entered promiscuous mode
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:14Z|00090|binding|INFO|Claiming lport 2089bf75-6119-4c42-a326-989b3931ec08 for this chassis.
Nov 25 03:26:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:14Z|00091|binding|INFO|2089bf75-6119-4c42-a326-989b3931ec08: Claiming fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.046 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c0:7d 10.100.0.7'], port_security=['fa:16:3e:b9:c0:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca088afd-31e5-497b-bfc5-ba1f56096642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2089bf75-6119-4c42-a326-989b3931ec08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.047 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2089bf75-6119-4c42-a326-989b3931ec08 in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.049 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954#033[00m
Nov 25 03:26:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:14Z|00092|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 ovn-installed in OVS
Nov 25 03:26:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:14Z|00093|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 up in Southbound
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 systemd-udevd[283443]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0686c896-a8e7-4fde-9aa6-fb5270581cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.070 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec4e7ebb-a1 in ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.073 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec4e7ebb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.073 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a9f2b9-29b9-4925-bd66-845cae3a7ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[268871a9-5fa4-4475-8d5c-ee02e10adddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 systemd-machined[215790]: New machine qemu-26-instance-00000017.
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.080 253542 INFO nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 8.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG oslo.service.loopingcall [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG nova.network.neutron [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.0827] device (tap2089bf75-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.0837] device (tap2089bf75-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.088 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf44951-97f7-4fcf-aa06-726eef0fb88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 systemd[1]: Started Virtual Machine qemu-26-instance-00000017.
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.114 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3117f0c8-608d-416e-a6ab-19d143ae6360]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.148 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[17cff3b4-4d4b-4ccd-be41-6708efc3c4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.1573] manager: (tapec4e7ebb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 25 03:26:14 np0005534516 systemd-udevd[283447]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.156 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dab46d66-1768-41e0-bfbf-95b68f60dec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.205 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d756f13-8460-4018-817c-60cb4a285f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6f06d28c-0978-4a39-86ec-737353d11800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.2410] device (tapec4e7ebb-a0): carrier: link connected
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.242 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[efcef1a0-f850-41ce-a401-ba5f21cd907e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4dae09c8-3505-4a93-a98e-b7270b795104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 27756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283478, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.273 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d367f02-8706-4202-8c9a-4a2ecadc6847]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:641f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452883, 'tstamp': 452883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283479, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.290 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd258e48-aa8f-4043-838f-9f5afbb52693]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 27756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283480, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6052f02d-35fe-454e-8b52-55c25a9b9b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774786305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f43a322d-8f17-4551-847d-81f95ae6d679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 NetworkManager[48915]: <info>  [1764059174.3820] manager: (tapec4e7ebb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 25 03:26:14 np0005534516 kernel: tapec4e7ebb-a0: entered promiscuous mode
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.389 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:14Z|00094|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.401 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83705e86-c491-49f5-a7d3-59b49fd0c157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.402 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:26:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.403 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'env', 'PROCESS_TAG=haproxy-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.411 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.414 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.425 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.446 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.446 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:14 np0005534516 nova_compute[253538]: 2025-11-25 08:26:14.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:14 np0005534516 podman[283529]: 2025-11-25 08:26:14.750579653 +0000 UTC m=+0.026982468 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:26:14 np0005534516 podman[283529]: 2025-11-25 08:26:14.855884867 +0000 UTC m=+0.132287632 container create e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.000 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059174.9994388, ca088afd-31e5-497b-bfc5-ba1f56096642 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.000 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Started (Lifecycle Event)#033[00m
Nov 25 03:26:15 np0005534516 systemd[1]: Started libpod-conmon-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope.
Nov 25 03:26:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.039 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.045 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059174.9996421, ca088afd-31e5-497b-bfc5-ba1f56096642 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.045 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:26:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d651f3fccf787af2f2b211bc049772630cbec479a6881f484dd2c0eb6af6c354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.073 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.094 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:26:15 np0005534516 podman[283529]: 2025-11-25 08:26:15.15903374 +0000 UTC m=+0.435436545 container init e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:26:15 np0005534516 podman[283529]: 2025-11-25 08:26:15.166369472 +0000 UTC m=+0.442772247 container start e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:26:15 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : New worker (283574) forked
Nov 25 03:26:15 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : Loading success.
Nov 25 03:26:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:15Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:26:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:15Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.798 253542 DEBUG nova.network.neutron [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.811 253542 INFO nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 1.73 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:15 np0005534516 nova_compute[253538]: 2025-11-25 08:26:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.034 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.034 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.053 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.077 253542 DEBUG oslo_concurrency.processutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.439 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.471 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.471 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.503 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.503 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 WARNING nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received unexpected event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644728288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.571 253542 DEBUG oslo_concurrency.processutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.579 253542 DEBUG nova.compute.provider_tree [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.592 253542 DEBUG nova.scheduler.client.report [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.610 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.636 253542 INFO nova.scheduler.client.report [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Deleted allocations for instance 39580ba3-504b-4e17-b64f-f44ef66091da#033[00m
Nov 25 03:26:16 np0005534516 nova_compute[253538]: 2025-11-25 08:26:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.817 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.818 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.832 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.834 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:17 np0005534516 nova_compute[253538]: 2025-11-25 08:26:17.834 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.609 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.610 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.611 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.611 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.612 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.614 253542 INFO nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Terminating instance#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.616 253542 DEBUG nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.663 253542 DEBUG nova.compute.manager [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.664 253542 DEBUG nova.compute.manager [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:18 np0005534516 nova_compute[253538]: 2025-11-25 08:26:18.665 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 555 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 03:26:19 np0005534516 kernel: tap6283ff13-d8 (unregistering): left promiscuous mode
Nov 25 03:26:19 np0005534516 NetworkManager[48915]: <info>  [1764059179.1184] device (tap6283ff13-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:19Z|00095|binding|INFO|Releasing lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 from this chassis (sb_readonly=0)
Nov 25 03:26:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:19Z|00096|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 down in Southbound
Nov 25 03:26:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:19Z|00097|binding|INFO|Removing iface tap6283ff13-d8 ovn-installed in OVS
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.190 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:37 10.100.0.3'], port_security=['fa:16:3e:57:ac:37 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c614f14d-ba5c-4351-9110-1ad24f7c46f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d60c6aa2-d509-48b2-b548-58ea5b315827, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6283ff13-d854-41d6-8a7a-eab602cc4cf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 in datapath f0cb07bc-dc94-4b65-bb7f-100ce36c9428 unbound from our chassis#033[00m
Nov 25 03:26:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.195 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:26:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64c969ce-39bb-4b93-827b-73f520df47e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.197 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 namespace which is not needed anymore#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 25 03:26:19 np0005534516 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 14.270s CPU time.
Nov 25 03:26:19 np0005534516 systemd-machined[215790]: Machine qemu-24-instance-00000016 terminated.
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.255 253542 INFO nova.virt.libvirt.driver [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance destroyed successfully.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.256 253542 DEBUG nova.objects.instance [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'resources' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.271 253542 DEBUG nova.virt.libvirt.vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:45Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.271 253542 DEBUG nova.network.os_vif_util [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.272 253542 DEBUG nova.network.os_vif_util [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.272 253542 DEBUG os_vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.275 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6283ff13-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.280 253542 INFO os_vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8')#033[00m
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:26:19 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : haproxy version is 2.8.14-c23fe91
Nov 25 03:26:19 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : path to executable is /usr/sbin/haproxy
Nov 25 03:26:19 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [WARNING]  (282554) : Exiting Master process...
Nov 25 03:26:19 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [ALERT]    (282554) : Current worker (282556) exited with code 143 (Terminated)
Nov 25 03:26:19 np0005534516 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [WARNING]  (282554) : All workers exited. Exiting... (0)
Nov 25 03:26:19 np0005534516 systemd[1]: libpod-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope: Deactivated successfully.
Nov 25 03:26:19 np0005534516 podman[283768]: 2025-11-25 08:26:19.427866093 +0000 UTC m=+0.135095080 container died df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.555 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.556 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.569 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 925a5a1a-10b7-4cfa-a432-0c154817475f does not exist
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e2ed2392-e53c-49d9-867d-57149a7af856 does not exist
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Processing event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:26:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d9d1744f-29e2-4ca1-b9a9-5d534a90ddf9 does not exist
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 WARNING nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received unexpected event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-deleted-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.573 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:26:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.578 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059179.5779002, ca088afd-31e5-497b-bfc5-ba1f56096642 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.581 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.586 253542 INFO nova.virt.libvirt.driver [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance spawned successfully.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.586 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.605 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.612 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.612 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.613 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.613 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.614 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.614 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.619 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.646 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.673 253542 INFO nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 15.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.673 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.732 253542 INFO nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 16.79 seconds to build instance.#033[00m
Nov 25 03:26:19 np0005534516 nova_compute[253538]: 2025-11-25 08:26:19.746 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7-userdata-shm.mount: Deactivated successfully.
Nov 25 03:26:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-92f235619556de56030111cecddf940547ff0043e49f6964151133b56b1d88ee-merged.mount: Deactivated successfully.
Nov 25 03:26:20 np0005534516 podman[283768]: 2025-11-25 08:26:20.173987787 +0000 UTC m=+0.881216744 container cleanup df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:26:20 np0005534516 systemd[1]: libpod-conmon-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope: Deactivated successfully.
Nov 25 03:26:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:26:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.439 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.739 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.739 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.741 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.741 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.743 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.743 253542 WARNING nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received unexpected event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:26:20 np0005534516 podman[283932]: 2025-11-25 08:26:20.887209519 +0000 UTC m=+0.682513723 container remove df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eaac6397-4f36-470c-b9b1-e55409746c54]: (4, ('Tue Nov 25 08:26:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 (df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7)\ndf9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7\nTue Nov 25 08:26:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 (df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7)\ndf9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.903 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72751ae1-8963-48e8-8b4c-2b537961d6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.904 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0cb07bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:20 np0005534516 kernel: tapf0cb07bc-d0: left promiscuous mode
Nov 25 03:26:20 np0005534516 nova_compute[253538]: 2025-11-25 08:26:20.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.935 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be05cd4e-9d2c-419a-a000-233bc9e1cd07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.953 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aef8aa50-1b70-4b91-8bfb-366c725c415d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2edda1ef-3f65-44db-9fb9-08d6ed9a8304]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.974 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6850f3c8-f89c-49fc-81ee-16f819338f84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449918, 'reachable_time': 28754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283969, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.978 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:26:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.979 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[69e61c85-5243-4657-834c-e1803c6aff69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:20 np0005534516 systemd[1]: run-netns-ovnmeta\x2df0cb07bc\x2ddc94\x2d4b65\x2dbb7f\x2d100ce36c9428.mount: Deactivated successfully.
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.001 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.002 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.014 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 03:26:21 np0005534516 podman[283977]: 2025-11-25 08:26:21.049175382 +0000 UTC m=+0.024736196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.258 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059166.2564685, 39580ba3-504b-4e17-b64f-f44ef66091da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.258 253542 INFO nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.298 253542 DEBUG nova.compute.manager [None req-65bcaea8-254e-4e1a-b52e-5318a3029571 - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:21 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.804 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.804 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.806 253542 INFO nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Terminating instance#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:21.807 253542 DEBUG nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.584856511 +0000 UTC m=+1.560417275 container create d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.585 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.587 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:26:22 np0005534516 systemd[1]: Started libpod-conmon-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope.
Nov 25 03:26:22 np0005534516 kernel: tapfdb3703c-f8 (unregistering): left promiscuous mode
Nov 25 03:26:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:22 np0005534516 NetworkManager[48915]: <info>  [1764059182.6765] device (tapfdb3703c-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:22Z|00098|binding|INFO|Releasing lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 from this chassis (sb_readonly=0)
Nov 25 03:26:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:22Z|00099|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 down in Southbound
Nov 25 03:26:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:22Z|00100|binding|INFO|Removing iface tapfdb3703c-f8 ovn-installed in OVS
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.692 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.706 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8b:90 10.100.0.11'], port_security=['fa:16:3e:f0:8b:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fdb3703c-f8da-4c10-9784-ed63bfe93fe1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.708 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 unbound from our chassis#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.711 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40ac1e11-655e-4b79-9ff0-41581e0dc1da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.715 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 namespace which is not needed anymore#033[00m
Nov 25 03:26:22 np0005534516 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.750449114 +0000 UTC m=+1.726009878 container init d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:26:22 np0005534516 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 14.413s CPU time.
Nov 25 03:26:22 np0005534516 systemd-machined[215790]: Machine qemu-21-instance-00000013 terminated.
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.760141923 +0000 UTC m=+1.735702687 container start d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.765696357 +0000 UTC m=+1.741257131 container attach d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:26:22 np0005534516 systemd[1]: libpod-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope: Deactivated successfully.
Nov 25 03:26:22 np0005534516 festive_vaughan[283993]: 167 167
Nov 25 03:26:22 np0005534516 conmon[283993]: conmon d9da1a08d30185296a0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope/container/memory.events
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.769960954 +0000 UTC m=+1.745521718 container died d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:26:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f31e0816a28c08c560d761f2bdeff31c1de2cc80ffa721a8db96e71131d706f6-merged.mount: Deactivated successfully.
Nov 25 03:26:22 np0005534516 podman[283977]: 2025-11-25 08:26:22.815825315 +0000 UTC m=+1.791386079 container remove d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.829 253542 INFO nova.virt.libvirt.driver [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance destroyed successfully.#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.831 253542 DEBUG nova.objects.instance [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'resources' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:22 np0005534516 systemd[1]: libpod-conmon-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope: Deactivated successfully.
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.856 253542 DEBUG nova.virt.libvirt.vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:30Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.857 253542 DEBUG nova.network.os_vif_util [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.857 253542 DEBUG nova.network.os_vif_util [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.858 253542 DEBUG os_vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb3703c-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:22 np0005534516 nova_compute[253538]: 2025-11-25 08:26:22.866 253542 INFO os_vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8')#033[00m
Nov 25 03:26:22 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : haproxy version is 2.8.14-c23fe91
Nov 25 03:26:22 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : path to executable is /usr/sbin/haproxy
Nov 25 03:26:22 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [WARNING]  (281474) : Exiting Master process...
Nov 25 03:26:22 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [ALERT]    (281474) : Current worker (281476) exited with code 143 (Terminated)
Nov 25 03:26:22 np0005534516 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [WARNING]  (281474) : All workers exited. Exiting... (0)
Nov 25 03:26:22 np0005534516 systemd[1]: libpod-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope: Deactivated successfully.
Nov 25 03:26:22 np0005534516 podman[284036]: 2025-11-25 08:26:22.902925115 +0000 UTC m=+0.070663796 container died e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:26:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84-userdata-shm.mount: Deactivated successfully.
Nov 25 03:26:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-afd67979dba2879b6102202e2641e51318f111cb23f4aeb0a647e6c7456f9106-merged.mount: Deactivated successfully.
Nov 25 03:26:22 np0005534516 podman[284036]: 2025-11-25 08:26:22.943968382 +0000 UTC m=+0.111707063 container cleanup e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:26:22 np0005534516 systemd[1]: libpod-conmon-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope: Deactivated successfully.
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.024 253542 INFO nova.virt.libvirt.driver [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deleting instance files /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a_del#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.026 253542 INFO nova.virt.libvirt.driver [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deletion of /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a_del complete#033[00m
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 25 03:26:23 np0005534516 podman[284090]: 2025-11-25 08:26:23.04214979 +0000 UTC m=+0.063409317 container remove e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e44b79b-9854-4eda-a60a-978d894bf515]: (4, ('Tue Nov 25 08:26:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 (e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84)\ne41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84\nTue Nov 25 08:26:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 (e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84)\ne41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7580a085-f973-443c-a17e-f4018d227ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.053 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:23 np0005534516 kernel: tapf86a7b06-d0: left promiscuous mode
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.057 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe1f942-5c5d-4500-a5a3-ef7081bec513]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 podman[284100]: 2025-11-25 08:26:23.07145367 +0000 UTC m=+0.072468697 container create d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.085 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ff8c87-fd4b-406a-b19c-80715de76a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.093 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3884268-57bc-4063-bad1-405e3233ca29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.096 253542 INFO nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 4.48 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG oslo.service.loopingcall [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG nova.network.neutron [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.111 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44dfc36a-45d8-4b3b-924a-8e2d8703dba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448068, 'reachable_time': 28220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284124, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.114 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:26:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.114 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[91323fd4-1aa5-4291-a501-8f2d7359d651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:23 np0005534516 systemd[1]: Started libpod-conmon-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope.
Nov 25 03:26:23 np0005534516 podman[284100]: 2025-11-25 08:26:23.047034894 +0000 UTC m=+0.048049951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:23 np0005534516 podman[284100]: 2025-11-25 08:26:23.174962075 +0000 UTC m=+0.175977132 container init d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:26:23 np0005534516 podman[284100]: 2025-11-25 08:26:23.188352526 +0000 UTC m=+0.189367553 container start d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:26:23 np0005534516 podman[284100]: 2025-11-25 08:26:23.192584463 +0000 UTC m=+0.193599510 container attach d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.213 253542 DEBUG nova.compute.manager [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG nova.compute.manager [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.215 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.287 253542 INFO nova.virt.libvirt.driver [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deleting instance files /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_del#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.288 253542 INFO nova.virt.libvirt.driver [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deletion of /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_del complete#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.347 253542 INFO nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.348 253542 DEBUG oslo.service.loopingcall [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.348 253542 DEBUG nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:23 np0005534516 nova_compute[253538]: 2025-11-25 08:26:23.349 253542 DEBUG nova.network.neutron [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:23 np0005534516 systemd[1]: run-netns-ovnmeta\x2df86a7b06\x2dd9db\x2d4462\x2dbd9b\x2d8ad648dec7f4.mount: Deactivated successfully.
Nov 25 03:26:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:24 np0005534516 laughing_carson[284127]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:26:24 np0005534516 laughing_carson[284127]: --> relative data size: 1.0
Nov 25 03:26:24 np0005534516 laughing_carson[284127]: --> All data devices are unavailable
Nov 25 03:26:24 np0005534516 systemd[1]: libpod-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope: Deactivated successfully.
Nov 25 03:26:24 np0005534516 podman[284100]: 2025-11-25 08:26:24.265558534 +0000 UTC m=+1.266573591 container died d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:26:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c-merged.mount: Deactivated successfully.
Nov 25 03:26:24 np0005534516 podman[284100]: 2025-11-25 08:26:24.327524889 +0000 UTC m=+1.328539906 container remove d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:26:24 np0005534516 systemd[1]: libpod-conmon-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope: Deactivated successfully.
Nov 25 03:26:24 np0005534516 nova_compute[253538]: 2025-11-25 08:26:24.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:24 np0005534516 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 03:26:24 np0005534516 NetworkManager[48915]: <info>  [1764059184.9381] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:24 np0005534516 nova_compute[253538]: 2025-11-25 08:26:24.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:24Z|00101|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 03:26:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:24Z|00102|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 03:26:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:24Z|00103|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 03:26:24 np0005534516 nova_compute[253538]: 2025-11-25 08:26:24.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:24.998 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:24.999 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.001 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 03:26:25 np0005534516 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000010.scope: Consumed 15.645s CPU time.
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c6c0cf-04c0-49fb-9ac8-a786286976e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 systemd-machined[215790]: Machine qemu-25-instance-00000010 terminated.
Nov 25 03:26:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 449 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.068 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[abf57031-11ec-4a67-9885-48a3fa191bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.071 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82b24518-e5e6-447e-ab05-c3fb48981841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.088549695 +0000 UTC m=+0.043720772 container create a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[692c866a-f4e2-4602-978f-74fb3ac0690e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 systemd[1]: Started libpod-conmon-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope.
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84fc0e8f-141d-4033-ae02-02465288fd8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284334, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.069476287 +0000 UTC m=+0.024647404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.164 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef041dc-21e7-439b-a784-c327f3b9d96a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284340, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284340, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.166 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.175 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.176 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.20217247 +0000 UTC m=+0.157343547 container init a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.212241589 +0000 UTC m=+0.167412656 container start a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.219759117 +0000 UTC m=+0.174930184 container attach a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 dreamy_margulis[284337]: 167 167
Nov 25 03:26:25 np0005534516 systemd[1]: libpod-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope: Deactivated successfully.
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.229246189 +0000 UTC m=+0.184417266 container died a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:26:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b53ac15188cb2d94fe363d2caf9f843b8d435f08b2228d3dd98f43100bb8ea5f-merged.mount: Deactivated successfully.
Nov 25 03:26:25 np0005534516 podman[284317]: 2025-11-25 08:26:25.273212126 +0000 UTC m=+0.228383193 container remove a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:26:25 np0005534516 systemd[1]: libpod-conmon-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope: Deactivated successfully.
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.420 253542 DEBUG nova.network.neutron [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.455 253542 INFO nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 2.36 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:25 np0005534516 podman[284372]: 2025-11-25 08:26:25.472688348 +0000 UTC m=+0.059422496 container create e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.507 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.508 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:25 np0005534516 systemd[1]: Started libpod-conmon-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope.
Nov 25 03:26:25 np0005534516 podman[284372]: 2025-11-25 08:26:25.443426588 +0000 UTC m=+0.030160756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:25 np0005534516 podman[284372]: 2025-11-25 08:26:25.590840138 +0000 UTC m=+0.177574316 container init e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:26:25 np0005534516 podman[284372]: 2025-11-25 08:26:25.597597666 +0000 UTC m=+0.184331824 container start e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:26:25 np0005534516 podman[284372]: 2025-11-25 08:26:25.603369506 +0000 UTC m=+0.190103684 container attach e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.607 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance shutdown successfully after 25 seconds.#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.612 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.617 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:58Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.619 253542 DEBUG os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.622 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.628 253542 INFO os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:26:25 np0005534516 nova_compute[253538]: 2025-11-25 08:26:25.682 253542 DEBUG oslo_concurrency.processutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.194 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.195 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete#033[00m
Nov 25 03:26:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481982083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.250 253542 DEBUG oslo_concurrency.processutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.257 253542 DEBUG nova.compute.provider_tree [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.273 253542 DEBUG nova.scheduler.client.report [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.335 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.363 253542 INFO nova.scheduler.client.report [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Deleted allocations for instance ceb93a9d-5e18-4351-9cfa-3949c00b448a#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.382 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.383 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.407 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.441 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]: {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    "0": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "devices": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "/dev/loop3"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            ],
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_name": "ceph_lv0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_size": "21470642176",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "name": "ceph_lv0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "tags": {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_name": "ceph",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.crush_device_class": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.encrypted": "0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_id": "0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.vdo": "0"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            },
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "vg_name": "ceph_vg0"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        }
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    ],
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    "1": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "devices": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "/dev/loop4"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            ],
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_name": "ceph_lv1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_size": "21470642176",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "name": "ceph_lv1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "tags": {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_name": "ceph",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.crush_device_class": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.encrypted": "0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_id": "1",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.vdo": "0"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            },
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "vg_name": "ceph_vg1"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        }
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    ],
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    "2": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "devices": [
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "/dev/loop5"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            ],
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_name": "ceph_lv2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_size": "21470642176",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "name": "ceph_lv2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "tags": {
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.cluster_name": "ceph",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.crush_device_class": "",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.encrypted": "0",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osd_id": "2",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:                "ceph.vdo": "0"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            },
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "type": "block",
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:            "vg_name": "ceph_vg2"
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:        }
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]:    ]
Nov 25 03:26:26 np0005534516 focused_ishizaka[284389]: }
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.464 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.467 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.489 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:26 np0005534516 systemd[1]: libpod-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope: Deactivated successfully.
Nov 25 03:26:26 np0005534516 podman[284372]: 2025-11-25 08:26:26.495209162 +0000 UTC m=+1.081943320 container died e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:26:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3-merged.mount: Deactivated successfully.
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.517 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.519 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.529 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.530 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.531 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.531 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.549 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:26 np0005534516 podman[284372]: 2025-11-25 08:26:26.550509723 +0000 UTC m=+1.137243871 container remove e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.552 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:26 np0005534516 systemd[1]: libpod-conmon-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope: Deactivated successfully.
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.582 253542 DEBUG nova.network.neutron [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.584 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:26.591 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.612 253542 INFO nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 3.26 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.663 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.663 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.805 253542 DEBUG oslo_concurrency.processutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.836 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:26 np0005534516 nova_compute[253538]: 2025-11-25 08:26:26.912 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.018 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.019 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.019 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.020 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.020 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.022 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.033 253542 WARNING nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:26:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 398 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1019 KiB/s wr, 191 op/s
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.053 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.054 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.062 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.063 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.081 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658417652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.224 253542 DEBUG oslo_concurrency.processutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.233 253542 DEBUG nova.compute.provider_tree [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.255 253542 DEBUG nova.scheduler.client.report [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.280 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.195676271 +0000 UTC m=+0.028237812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.296934294 +0000 UTC m=+0.129495775 container create c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.307 253542 INFO nova.scheduler.client.report [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Deleted allocations for instance 1664fad5-765c-4ecc-93e2-6f96c7fb6d44#033[00m
Nov 25 03:26:27 np0005534516 systemd[1]: Started libpod-conmon-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope.
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.385 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.417927144 +0000 UTC m=+0.250488605 container init c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.425236196 +0000 UTC m=+0.257797677 container start c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:26:27 np0005534516 vigorous_leavitt[284814]: 167 167
Nov 25 03:26:27 np0005534516 systemd[1]: libpod-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope: Deactivated successfully.
Nov 25 03:26:27 np0005534516 conmon[284814]: conmon c9af7bc7ff20425be8e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope/container/memory.events
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.51605449 +0000 UTC m=+0.348615951 container attach c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.516461912 +0000 UTC m=+0.349023363 container died c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:26:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079511693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.556 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-356e5b7edcbf1016124a63d089578757035be54b6ba09f006c7d5f17c7c2ea4b-merged.mount: Deactivated successfully.
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.596 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.601 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:27 np0005534516 podman[284775]: 2025-11-25 08:26:27.607119591 +0000 UTC m=+0.439681042 container remove c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:26:27 np0005534516 systemd[1]: libpod-conmon-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope: Deactivated successfully.
Nov 25 03:26:27 np0005534516 podman[284879]: 2025-11-25 08:26:27.873601417 +0000 UTC m=+0.084145680 container create 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:26:27 np0005534516 podman[284879]: 2025-11-25 08:26:27.810773698 +0000 UTC m=+0.021317941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:26:27 np0005534516 systemd[1]: Started libpod-conmon-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope.
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.960 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.960 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.972 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.973 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.975 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.976 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.978 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.979 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:27 np0005534516 nova_compute[253538]: 2025-11-25 08:26:27.979 253542 WARNING nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received unexpected event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:26:28 np0005534516 podman[284879]: 2025-11-25 08:26:28.040510987 +0000 UTC m=+0.251055230 container init 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:26:28 np0005534516 podman[284879]: 2025-11-25 08:26:28.049549157 +0000 UTC m=+0.260093380 container start 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:26:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656318652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:28 np0005534516 podman[284879]: 2025-11-25 08:26:28.069978723 +0000 UTC m=+0.280522946 container attach 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:26:28 np0005534516 podman[284892]: 2025-11-25 08:26:28.076610917 +0000 UTC m=+0.157054118 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.085 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.089 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminT
estJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:26Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.090 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.092 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.099 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <name>instance-00000010</name>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:26:27</nova:creationTime>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <target dev="tap4ad9572b-6a"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:26:28 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:26:28 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:26:28 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:26:28 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.101 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:26Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.102 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.103 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.104 253542 DEBUG os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.106 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.107 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.113 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:28 np0005534516 NetworkManager[48915]: <info>  [1764059188.1168] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.124 253542 INFO os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.177 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.177 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.178 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.179 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.212 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.232 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.258 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'keypairs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.692 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-deleted-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.693 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.693 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.694 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.694 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.695 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.695 253542 WARNING nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-deleted-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.697 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.697 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.698 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.698 253542 WARNING nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.702 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.712 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbur3q6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.851 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbur3q6b" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.893 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:28 np0005534516 nova_compute[253538]: 2025-11-25 08:26:28.900 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:26:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:26:29 np0005534516 kind_shannon[284905]: {
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_id": 1,
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "type": "bluestore"
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    },
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_id": 2,
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "type": "bluestore"
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    },
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_id": 0,
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:        "type": "bluestore"
Nov 25 03:26:29 np0005534516 kind_shannon[284905]:    }
Nov 25 03:26:29 np0005534516 kind_shannon[284905]: }
Nov 25 03:26:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 393 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 190 op/s
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.053 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.054 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.
Nov 25 03:26:29 np0005534516 systemd[1]: libpod-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Deactivated successfully.
Nov 25 03:26:29 np0005534516 systemd[1]: libpod-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Consumed 1.006s CPU time.
Nov 25 03:26:29 np0005534516 podman[284879]: 2025-11-25 08:26:29.079328003 +0000 UTC m=+1.289872246 container died 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:26:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee-merged.mount: Deactivated successfully.
Nov 25 03:26:29 np0005534516 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 03:26:29 np0005534516 NetworkManager[48915]: <info>  [1764059189.1125] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 25 03:26:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:29Z|00104|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:29Z|00105|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:26:29 np0005534516 podman[284879]: 2025-11-25 08:26:29.132520045 +0000 UTC m=+1.343064258 container remove 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:26:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:29Z|00106|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 systemd[1]: libpod-conmon-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Deactivated successfully.
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 systemd-machined[215790]: New machine qemu-27-instance-00000010.
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.161 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.162 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 03:26:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:29Z|00107|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.163 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 03:26:29 np0005534516 systemd[1]: Started Virtual Machine qemu-27-instance-00000010.
Nov 25 03:26:29 np0005534516 systemd-udevd[285034]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5551d83f-8746-4590-9e3e-48db09addcc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:26:29 np0005534516 NetworkManager[48915]: <info>  [1764059189.1910] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:26:29 np0005534516 NetworkManager[48915]: <info>  [1764059189.1924] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:26:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9ab726ba-138a-479a-a078-9fd78359ba2c does not exist
Nov 25 03:26:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d10dd466-4fcc-49b2-bb69-36236bd85e42 does not exist
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f3af8e34-bce8-424e-a8a7-9b48bfd6c7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3a72e9fb-8502-4029-a689-941fbd06cc37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.238 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[22e7f59e-6be3-4eb9-8f59-750bcb2833b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4702e5e-3235-4729-8ae8-554665c1ae9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285070, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[777dab95-8245-449c-bd5e-dfe29bed0d9a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285080, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285080, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.290 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:26:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.295 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.763 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.764 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059189.7644808, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.771 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.771 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.792 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.800 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.800 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.804 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.827 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.827 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059189.7678297, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.828 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.845 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.849 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.872 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 03:26:29 np0005534516 nova_compute[253538]: 2025-11-25 08:26:29.899 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.025 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.026 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.026 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 03:26:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:26:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:30Z|00108|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 03:26:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:30Z|00109|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.198 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.203 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:30Z|00110|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 03:26:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:30Z|00111|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.793 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 WARNING nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:30 np0005534516 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 WARNING nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.#033[00m
Nov 25 03:26:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 25 03:26:31 np0005534516 podman[285142]: 2025-11-25 08:26:31.844513615 +0000 UTC m=+0.082752852 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 03:26:32 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 03:26:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 372 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Nov 25 03:26:33 np0005534516 nova_compute[253538]: 2025-11-25 08:26:33.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:33Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 03:26:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:33Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 03:26:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:34 np0005534516 nova_compute[253538]: 2025-11-25 08:26:34.253 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059179.2513971, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:34 np0005534516 nova_compute[253538]: 2025-11-25 08:26:34.253 253542 INFO nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:34 np0005534516 nova_compute[253538]: 2025-11-25 08:26:34.283 253542 DEBUG nova.compute.manager [None req-e4b6013e-160f-4e09-90e9-b8a56198e01d - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:34 np0005534516 nova_compute[253538]: 2025-11-25 08:26:34.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 391 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 277 op/s
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.186 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.187 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.187 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.188 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.189 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.191 253542 INFO nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Terminating instance#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.193 253542 DEBUG nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:35 np0005534516 kernel: tap817e9f9b-9d (unregistering): left promiscuous mode
Nov 25 03:26:35 np0005534516 NetworkManager[48915]: <info>  [1764059195.2498] device (tap817e9f9b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.262 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:35Z|00112|binding|INFO|Releasing lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 from this chassis (sb_readonly=0)
Nov 25 03:26:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:35Z|00113|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 down in Southbound
Nov 25 03:26:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:35Z|00114|binding|INFO|Removing iface tap817e9f9b-9d ovn-installed in OVS
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:73 10.100.0.4'], port_security=['fa:16:3e:50:3c:73 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.274 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.277 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.301 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3125a6-43cb-4ec1-b193-17d4727ab929]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 25 03:26:35 np0005534516 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 14.763s CPU time.
Nov 25 03:26:35 np0005534516 systemd-machined[215790]: Machine qemu-22-instance-00000014 terminated.
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.333 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4b09113d-8a13-48b4-adac-d32827026da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.337 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f72bb549-b4ff-4c53-82c0-7a7c1ac2ebb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.368 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[58dbbe95-3064-4a79-91bf-479ef2b04366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae65681-a4ba-4b1a-8ff5-fe350c710972]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285174, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a082b5-fbd4-4106-bf7c-87b3c90ae619]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285175, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285175, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.418 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.426 253542 INFO nova.virt.libvirt.driver [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance destroyed successfully.#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.427 253542 DEBUG nova.objects.instance [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.429 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.429 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.430 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.431 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.440 253542 DEBUG nova.virt.libvirt.vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.441 253542 DEBUG nova.network.os_vif_util [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.441 253542 DEBUG nova.network.os_vif_util [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.442 253542 DEBUG os_vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.444 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap817e9f9b-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.454 253542 INFO os_vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d')#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.775 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.777 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.777 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.850 253542 INFO nova.virt.libvirt.driver [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deleting instance files /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_del#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.852 253542 INFO nova.virt.libvirt.driver [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deletion of /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_del complete#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.926 253542 INFO nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.928 253542 DEBUG oslo.service.loopingcall [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.929 253542 DEBUG nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:35 np0005534516 nova_compute[253538]: 2025-11-25 08:26:35.930 253542 DEBUG nova.network.neutron [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.035 253542 DEBUG nova.network.neutron [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 378 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.052 253542 INFO nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 1.12 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.093 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.094 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.204 253542 DEBUG oslo_concurrency.processutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568247570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.613 253542 DEBUG oslo_concurrency.processutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.620 253542 DEBUG nova.compute.provider_tree [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.633 253542 DEBUG nova.scheduler.client.report [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.651 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.676 253542 INFO nova.scheduler.client.report [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.746 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.825 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059182.822616, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.826 253542 INFO nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.844 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.845 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.846 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.847 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.848 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.849 253542 WARNING nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received unexpected event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.849 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-deleted-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:37 np0005534516 nova_compute[253538]: 2025-11-25 08:26:37.856 253542 DEBUG nova.compute.manager [None req-8b61a965-bf5b-40a2-8ce6-1c508b4e5bdc - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.403 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.404 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.405 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.406 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.407 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.410 253542 INFO nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Terminating instance#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.413 253542 DEBUG nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:38 np0005534516 kernel: tapa0a9c956-aa (unregistering): left promiscuous mode
Nov 25 03:26:38 np0005534516 NetworkManager[48915]: <info>  [1764059198.4732] device (tapa0a9c956-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:38Z|00115|binding|INFO|Releasing lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c from this chassis (sb_readonly=0)
Nov 25 03:26:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:38Z|00116|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c down in Southbound
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:38Z|00117|binding|INFO|Removing iface tapa0a9c956-aa ovn-installed in OVS
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.488 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:81 10.100.0.10'], port_security=['fa:16:3e:11:6b:81 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.490 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.492 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ba8ba7-3bd4-4d67-88a1-d66c389de283]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 03:26:38 np0005534516 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.726s CPU time.
Nov 25 03:26:38 np0005534516 systemd-machined[215790]: Machine qemu-20-instance-00000012 terminated.
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1486c0b1-b170-4fdf-91fe-d8e9bf5bd4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f7711ab7-dfa3-48ce-99c6-02a9d6af147b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.577 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f2983b72-bbe6-4b28-8074-8200094a9dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 podman[285228]: 2025-11-25 08:26:38.595128838 +0000 UTC m=+0.096295206 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0de3d83c-ba4b-4f29-a343-0344a8871b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285262, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30b1d5e0-b72e-4f1f-afac-d4c9cdd1c7c9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285267, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285267, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.614 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.620 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.622 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.622 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.623 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.648 253542 INFO nova.virt.libvirt.driver [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance destroyed successfully.#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.648 253542 DEBUG nova.objects.instance [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.661 253542 DEBUG nova.virt.libvirt.vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.661 253542 DEBUG nova.network.os_vif_util [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.662 253542 DEBUG nova.network.os_vif_util [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.662 253542 DEBUG os_vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.665 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0a9c956-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.728 253542 INFO os_vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa')#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:38 np0005534516 nova_compute[253538]: 2025-11-25 08:26:38.796 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 348 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 203 op/s
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.181 253542 INFO nova.virt.libvirt.driver [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deleting instance files /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_del#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.182 253542 INFO nova.virt.libvirt.driver [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deletion of /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_del complete#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.255 253542 INFO nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG oslo.service.loopingcall [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG nova.network.neutron [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:39 np0005534516 nova_compute[253538]: 2025-11-25 08:26:39.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.820 253542 DEBUG nova.network.neutron [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.834 253542 INFO nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.879 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.879 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.916 253542 DEBUG nova.compute.manager [req-afcab062-3f05-4eaa-b580-d68811020e8d req-124c631d-6164-4608-a048-efb88603ed82 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-deleted-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:40 np0005534516 nova_compute[253538]: 2025-11-25 08:26:40.989 253542 DEBUG oslo_concurrency.processutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.015 253542 DEBUG nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.017 253542 DEBUG nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.017 253542 WARNING nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received unexpected event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:26:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 285 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 192 op/s
Nov 25 03:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844717133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.458 253542 DEBUG oslo_concurrency.processutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.463 253542 DEBUG nova.compute.provider_tree [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.491 253542 DEBUG nova.scheduler.client.report [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.514 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.541 253542 INFO nova.scheduler.client.report [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.603 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:41Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:26:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:41Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.985 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:41 np0005534516 nova_compute[253538]: 2025-11-25 08:26:41.985 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.000 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.070 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.070 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.072 253542 INFO nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Terminating instance#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.073 253542 DEBUG nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.080 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.080 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.087 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.088 253542 INFO nova.compute.claims [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.266 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:42 np0005534516 kernel: tape9d1298d-41 (unregistering): left promiscuous mode
Nov 25 03:26:42 np0005534516 NetworkManager[48915]: <info>  [1764059202.5438] device (tape9d1298d-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:42Z|00118|binding|INFO|Releasing lport e9d1298d-411a-4018-ba08-c41d40ba0d41 from this chassis (sb_readonly=0)
Nov 25 03:26:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:42Z|00119|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 down in Southbound
Nov 25 03:26:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:42Z|00120|binding|INFO|Removing iface tape9d1298d-41 ovn-installed in OVS
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.559 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:6c:e2 10.100.0.3'], port_security=['fa:16:3e:af:6c:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '23ace5af-6840-42aa-a801-98abbb4f3a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e9d1298d-411a-4018-ba08-c41d40ba0d41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e9d1298d-411a-4018-ba08-c41d40ba0d41 in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ced9b8-1d3b-4e52-8dc1-1c5fa64f0c41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.606 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[57522015-d764-43bd-90e4-41bfccc76ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.609 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4a31f8-4a0e-4639-84e7-473c257312fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 25 03:26:42 np0005534516 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 18.176s CPU time.
Nov 25 03:26:42 np0005534516 systemd-machined[215790]: Machine qemu-19-instance-00000011 terminated.
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6a4d29-609d-429f-8332-b0db7ee25bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.660 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ab9411-81de-4ffe-9f4c-4afd28a310cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 952, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 952, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285354, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3942dcff-8967-40e7-bb35-4247e12b4f52]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285355, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285355, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.708 253542 INFO nova.virt.libvirt.driver [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance destroyed successfully.#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.712 253542 DEBUG nova.objects.instance [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.737 253542 DEBUG nova.virt.libvirt.vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:04Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.738 253542 DEBUG nova.network.os_vif_util [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.739 253542 DEBUG nova.network.os_vif_util [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.739 253542 DEBUG os_vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.742 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9d1298d-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.747 253542 INFO os_vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41')#033[00m
Nov 25 03:26:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906575829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.826 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.831 253542 DEBUG nova.compute.provider_tree [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.845 253542 DEBUG nova.scheduler.client.report [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.864 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.865 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.909 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.909 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.927 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:26:42 np0005534516 nova_compute[253538]: 2025-11-25 08:26:42.944 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.023 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.024 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.025 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating image(s)#033[00m
Nov 25 03:26:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 265 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.051 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.079 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.099 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.104 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.139 253542 DEBUG nova.policy [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '02e795c75a3b40bbbc3ca83d0501777f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52217f37b23343d697fa6d2be38e236d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.192 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.194 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.195 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.196 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.230 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.234 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c787de46-dba9-458e-acc0-57470097fac5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.338 253542 INFO nova.virt.libvirt.driver [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deleting instance files /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52_del#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.339 253542 INFO nova.virt.libvirt.driver [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deletion of /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52_del complete#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.399 253542 INFO nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 1.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG oslo.service.loopingcall [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG nova.network.neutron [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.513 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c787de46-dba9-458e-acc0-57470097fac5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:43 np0005534516 nova_compute[253538]: 2025-11-25 08:26:43.563 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] resizing rbd image c787de46-dba9-458e-acc0-57470097fac5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:26:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.188 253542 DEBUG nova.objects.instance [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'migration_context' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Ensure instance console log exists: /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.199 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.199 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.750 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Successfully created port: 0416b402-0842-4b73-910b-d30a5750474c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.963 253542 DEBUG nova.network.neutron [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:44 np0005534516 nova_compute[253538]: 2025-11-25 08:26:44.984 253542 INFO nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.023 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.024 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 249 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.6 MiB/s wr, 237 op/s
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.118 253542 DEBUG oslo_concurrency.processutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.340 253542 DEBUG nova.compute.manager [req-a4c87865-f06c-4c08-ba2e-b4613ed78149 req-060bfe8a-39fc-47a7-aff3-446af7c6698e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-deleted-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580963184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.570 253542 DEBUG oslo_concurrency.processutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.576 253542 DEBUG nova.compute.provider_tree [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.595 253542 DEBUG nova.scheduler.client.report [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.619 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.675 253542 INFO nova.scheduler.client.report [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 23ace5af-6840-42aa-a801-98abbb4f3a52#033[00m
Nov 25 03:26:45 np0005534516 nova_compute[253538]: 2025-11-25 08:26:45.789 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 238 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 431 KiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.470 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Successfully updated port: 0416b402-0842-4b73-910b-d30a5750474c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.664 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.665 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.665 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.724 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.725 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.727 253542 INFO nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Terminating instance#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.728 253542 DEBUG nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.863 253542 DEBUG nova.compute.manager [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.863 253542 DEBUG nova.compute.manager [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.864 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:47 np0005534516 nova_compute[253538]: 2025-11-25 08:26:47.913 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:26:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:48 np0005534516 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 03:26:48 np0005534516 NetworkManager[48915]: <info>  [1764059208.9455] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:26:48 np0005534516 nova_compute[253538]: 2025-11-25 08:26:48.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:48Z|00121|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 03:26:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:48Z|00122|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 03:26:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:26:48Z|00123|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 03:26:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.972 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:26:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.973 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis#033[00m
Nov 25 03:26:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.974 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93269c36-ab23-4d95-925a-798173550624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:26:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f90d55a8-ac50-4eaa-9f50-d7ec48d06db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.976 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93269c36-ab23-4d95-925a-798173550624 namespace which is not needed anymore#033[00m
Nov 25 03:26:48 np0005534516 nova_compute[253538]: 2025-11-25 08:26:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:49 np0005534516 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 03:26:49 np0005534516 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000010.scope: Consumed 13.324s CPU time.
Nov 25 03:26:49 np0005534516 systemd-machined[215790]: Machine qemu-27-instance-00000010 terminated.
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.016 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 246 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.067 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.067 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance network_info: |[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.068 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.068 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.074 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start _get_guest_xml network_info=[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.084 253542 WARNING nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.096 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.097 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.102 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.103 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.104 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.104 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.106 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.106 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.107 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.107 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.108 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.109 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.109 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.110 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.110 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.111 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.116 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.166 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.166 253542 DEBUG nova.objects.instance [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.195 253542 DEBUG nova.virt.libvirt.vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:26:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:26:33Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.196 253542 DEBUG nova.network.os_vif_util [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.198 253542 DEBUG nova.network.os_vif_util [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.199 253542 DEBUG os_vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.202 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.208 253542 INFO os_vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')#033[00m
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : haproxy version is 2.8.14-c23fe91
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : path to executable is /usr/sbin/haproxy
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : Exiting Master process...
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : Exiting Master process...
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [ALERT]    (279225) : Current worker (279227) exited with code 143 (Terminated)
Nov 25 03:26:49 np0005534516 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : All workers exited. Exiting... (0)
Nov 25 03:26:49 np0005534516 systemd[1]: libpod-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope: Deactivated successfully.
Nov 25 03:26:49 np0005534516 podman[285601]: 2025-11-25 08:26:49.574248519 +0000 UTC m=+0.487453504 container died 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:26:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190769572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.646 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.680 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.685 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:49 np0005534516 nova_compute[253538]: 2025-11-25 08:26:49.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.429 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059195.4277666, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.430 253542 INFO nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.451 253542 DEBUG nova.compute.manager [None req-be4c7771-d3d8-40f3-8e25-67d0c91263f4 - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea-userdata-shm.mount: Deactivated successfully.
Nov 25 03:26:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:26:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/660808336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:26:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6f08b2f30a3472e570aff96bae58b6aa5c1b8e49bb02775e83be72c16348ca83-merged.mount: Deactivated successfully.
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.710 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.711 253542 DEBUG nova.virt.libvirt.vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTest
JSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:42Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.711 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.712 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.713 253542 DEBUG nova.objects.instance [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.726 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <uuid>c787de46-dba9-458e-acc0-57470097fac5</uuid>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <name>instance-00000018</name>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:name>tempest-SecurityGroupsTestJSON-server-910851624</nova:name>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:26:49</nova:creationTime>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <nova:port uuid="0416b402-0842-4b73-910b-d30a5750474c">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="serial">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="uuid">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk.config">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:97:a5:e1"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <target dev="tap0416b402-08"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log" append="off"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:26:50 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:26:50 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:26:50 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:26:50 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Preparing to wait for external event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.virt.libvirt.vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-Security
GroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:42Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG os_vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0416b402-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0416b402-08, col_values=(('external_ids', {'iface-id': '0416b402-0842-4b73-910b-d30a5750474c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:a5:e1', 'vm-uuid': 'c787de46-dba9-458e-acc0-57470097fac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:50 np0005534516 NetworkManager[48915]: <info>  [1764059210.7348] manager: (tap0416b402-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.740 253542 INFO os_vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.978 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.978 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:50 np0005534516 nova_compute[253538]: 2025-11-25 08:26:50.994 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 25 03:26:51 np0005534516 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:51 np0005534516 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:26:51 np0005534516 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No VIF found with MAC fa:16:3e:97:a5:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:26:51 np0005534516 nova_compute[253538]: 2025-11-25 08:26:51.457 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Using config drive#033[00m
Nov 25 03:26:51 np0005534516 nova_compute[253538]: 2025-11-25 08:26:51.524 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.016 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.016 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.035 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.067 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating config drive at /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.072 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxcegl1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:52 np0005534516 podman[285601]: 2025-11-25 08:26:52.086768178 +0000 UTC m=+2.999973133 container cleanup 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:26:52 np0005534516 systemd[1]: libpod-conmon-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope: Deactivated successfully.
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.176 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.177 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.186 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.186 253542 INFO nova.compute.claims [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.215 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxcegl1t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.666 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.671 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config c787de46-dba9-458e-acc0-57470097fac5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.706 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.708 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.708 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.709 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.709 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.710 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.740 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.757 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.758 253542 DEBUG nova.compute.provider_tree [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.775 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.798 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:26:52 np0005534516 nova_compute[253538]: 2025-11-25 08:26:52.881 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 137 op/s
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:26:53
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:26:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:26:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344841007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.348 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.354 253542 DEBUG nova.compute.provider_tree [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.374 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.393 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.394 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.437 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.438 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.465 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.488 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.581 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.582 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:26:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:26:53 np0005534516 nova_compute[253538]: 2025-11-25 08:26:53.916 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating image(s)#033[00m
Nov 25 03:26:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.922 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.941 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.965 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.968 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.994 253542 DEBUG nova.policy [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.996 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059198.6460123, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:54 np0005534516 nova_compute[253538]: 2025-11-25 08:26:54.996 253542 INFO nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.001 253542 DEBUG nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.001 253542 WARNING nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.028 253542 DEBUG nova.compute.manager [None req-eb85c2bd-abbb-4882-82d3-84e66b52d64b - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.029 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.029 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.030 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.030 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.052 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:26:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 3.3 MiB/s wr, 106 op/s
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.055 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.561 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Successfully created port: 521823df-589a-4370-a3ea-a5a6f4c73a6a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:55 np0005534516 podman[285744]: 2025-11-25 08:26:55.917417184 +0000 UTC m=+3.807973759 container remove 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:26:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.929 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e68c64c-3c5a-4a9d-8bc0-d62f192f26ce]: (4, ('Tue Nov 25 08:26:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624 (5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea)\n5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea\nTue Nov 25 08:26:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624 (5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea)\n5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.932 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7224dfd9-6807-4a6a-8e25-40ec23144e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.933 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:55 np0005534516 kernel: tap93269c36-a0: left promiscuous mode
Nov 25 03:26:55 np0005534516 nova_compute[253538]: 2025-11-25 08:26:55.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[209da6d4-a35d-4182-ab60-c3e6cccc74c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.998 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5023897b-d168-4061-a124-21366ad1e2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac7d5e0-daef-4120-9f68-182dd4940c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.021 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[132fc897-12aa-4ec5-9e76-78d1305ab6de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445508, 'reachable_time': 31941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285908, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.025 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93269c36-ab23-4d95-925a-798173550624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:26:56 np0005534516 systemd[1]: run-netns-ovnmeta\x2d93269c36\x2dab23\x2d4d95\x2d925a\x2d798173550624.mount: Deactivated successfully.
Nov 25 03:26:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.025 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8e615fdc-e3ba-437e-8ef0-b2504f0d3f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.348 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Successfully updated port: 521823df-589a-4370-a3ea-a5a6f4c73a6a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.371 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.371 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.372 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.552 253542 DEBUG nova.compute.manager [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-changed-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.553 253542 DEBUG nova.compute.manager [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Refreshing instance network info cache due to event network-changed-521823df-589a-4370-a3ea-a5a6f4c73a6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.553 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:26:56 np0005534516 nova_compute[253538]: 2025-11-25 08:26:56.592 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:26:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.478 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.517 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance network_info: |[{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Refreshing network info cache for port 521823df-589a-4370-a3ea-a5a6f4c73a6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.706 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059202.7051883, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.707 253542 INFO nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:26:57 np0005534516 nova_compute[253538]: 2025-11-25 08:26:57.720 253542 DEBUG nova.compute.manager [None req-bed7a0fa-c90e-44b0-a71b-15f6cb61e566 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:26:58 np0005534516 podman[285919]: 2025-11-25 08:26:58.807654827 +0000 UTC m=+0.056600068 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:26:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 118 KiB/s wr, 25 op/s
Nov 25 03:26:59 np0005534516 nova_compute[253538]: 2025-11-25 08:26:59.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:26:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:00 np0005534516 nova_compute[253538]: 2025-11-25 08:27:00.541 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updated VIF entry in instance network info cache for port 521823df-589a-4370-a3ea-a5a6f4c73a6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:27:00 np0005534516 nova_compute[253538]: 2025-11-25 08:27:00.542 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:00 np0005534516 nova_compute[253538]: 2025-11-25 08:27:00.557 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:00 np0005534516 nova_compute[253538]: 2025-11-25 08:27:00.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.028 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config c787de46-dba9-458e-acc0-57470097fac5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.028 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deleting local config drive /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config because it was imported into RBD.#033[00m
Nov 25 03:27:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 260 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 496 KiB/s wr, 13 op/s
Nov 25 03:27:01 np0005534516 kernel: tap0416b402-08: entered promiscuous mode
Nov 25 03:27:01 np0005534516 NetworkManager[48915]: <info>  [1764059221.0872] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.089 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:01Z|00124|binding|INFO|Claiming lport 0416b402-0842-4b73-910b-d30a5750474c for this chassis.
Nov 25 03:27:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:01Z|00125|binding|INFO|0416b402-0842-4b73-910b-d30a5750474c: Claiming fa:16:3e:97:a5:e1 10.100.0.4
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.097 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.098 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.099 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954#033[00m
Nov 25 03:27:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:01Z|00126|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c ovn-installed in OVS
Nov 25 03:27:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:01Z|00127|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c up in Southbound
Nov 25 03:27:01 np0005534516 systemd-machined[215790]: New machine qemu-28-instance-00000018.
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.120 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[774ea90c-f294-4ec3-915e-cad83639da51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 systemd[1]: Started Virtual Machine qemu-28-instance-00000018.
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.142 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9378f3-5152-490e-9815-c00e1faa230f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.145 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f466ef43-5c5e-41d5-b81b-c52ced0bcdfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 systemd-udevd[285955]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:01 np0005534516 NetworkManager[48915]: <info>  [1764059221.1594] device (tap0416b402-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:27:01 np0005534516 NetworkManager[48915]: <info>  [1764059221.1615] device (tap0416b402-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.173 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b89f1ac1-5ecb-4564-bf48-82f03e8e5de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92df55f4-e025-4533-a897-9de3da0cb42d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285965, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8850034-bd2b-4086-a481-54f368f5e276]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285967, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285967, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.208 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.210 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.212 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.212 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.354 253542 DEBUG nova.compute.manager [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.354 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:01 np0005534516 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG nova.compute.manager [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Processing event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:27:02 np0005534516 nova_compute[253538]: 2025-11-25 08:27:02.118 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:02 np0005534516 nova_compute[253538]: 2025-11-25 08:27:02.197 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:27:02 np0005534516 podman[286047]: 2025-11-25 08:27:02.853047257 +0000 UTC m=+0.087697880 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 277 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.200 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.201 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.1996746, c787de46-dba9-458e-acc0-57470097fac5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Started (Lifecycle Event)#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.206 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.209 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance spawned successfully.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.209 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.222 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.227 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.231 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.233 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.233 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.254 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.254 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.2032046, c787de46-dba9-458e-acc0-57470097fac5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.255 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.284 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.287 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.2052667, c787de46-dba9-458e-acc0-57470097fac5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.287 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.313 253542 INFO nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 20.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.314 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.319 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.387 253542 INFO nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 21.32 seconds to build instance.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.402 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 DEBUG nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 WARNING nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021430704442599465 of space, bias 1.0, pg target 0.642921133277984 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:27:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.851 253542 DEBUG nova.objects.instance [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.864 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.865 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Ensure instance console log exists: /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.869 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start _get_guest_xml network_info=[{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.874 253542 WARNING nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.887 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.888 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.892 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.893 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.893 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.894 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.894 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.897 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.897 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:27:03 np0005534516 nova_compute[253538]: 2025-11-25 08:27:03.900 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.165 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059209.1640756, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.166 253542 INFO nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.192 253542 DEBUG nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.195 253542 DEBUG nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.221 253542 INFO nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 25 03:27:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011166358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.441 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.466 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.472 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043833436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.928 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.931 253542 DEBUG nova.virt.libvirt.vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:53Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.932 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.934 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.936 253542 DEBUG nova.objects.instance [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.963 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <uuid>0fc86c7e-5de2-431c-9152-cfe293f8cc7d</uuid>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <name>instance-00000019</name>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-232041693</nova:name>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:27:03</nova:creationTime>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <nova:port uuid="521823df-589a-4370-a3ea-a5a6f4c73a6a">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="serial">0fc86c7e-5de2-431c-9152-cfe293f8cc7d</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="uuid">0fc86c7e-5de2-431c-9152-cfe293f8cc7d</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:3f:ef:3b"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <target dev="tap521823df-58"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/console.log" append="off"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:27:04 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:27:04 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:27:04 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:27:04 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.974 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Preparing to wait for external event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.977 253542 DEBUG nova.virt.libvirt.vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:53Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.977 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.979 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.980 253542 DEBUG os_vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.982 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.983 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.988 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521823df-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.989 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521823df-58, col_values=(('external_ids', {'iface-id': '521823df-589a-4370-a3ea-a5a6f4c73a6a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:ef:3b', 'vm-uuid': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:04 np0005534516 NetworkManager[48915]: <info>  [1764059224.9928] manager: (tap521823df-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 25 03:27:04 np0005534516 nova_compute[253538]: 2025-11-25 08:27:04.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.001 253542 INFO os_vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58')#033[00m
Nov 25 03:27:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 257 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.078 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.079 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.079 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:3f:ef:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.080 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Using config drive#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.107 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.172 253542 INFO nova.virt.libvirt.driver [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.173 253542 INFO nova.virt.libvirt.driver [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.254 253542 INFO nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 17.53 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.254 253542 DEBUG oslo.service.loopingcall [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.255 253542 DEBUG nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.255 253542 DEBUG nova.network.neutron [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.878 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating config drive at /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config#033[00m
Nov 25 03:27:05 np0005534516 nova_compute[253538]: 2025-11-25 08:27:05.890 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkx_hja5b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.019 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkx_hja5b" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.043 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.046 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.107 253542 DEBUG nova.network.neutron [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.131 253542 INFO nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.161 253542 DEBUG nova.compute.manager [req-47c5a6ae-e500-41d2-b257-c65ac7571bf7 req-7006548c-b6d7-4dbb-9ba5-e8fc41377cc9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-deleted-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.262 253542 DEBUG oslo_concurrency.processutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.627 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.628 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deleting local config drive /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config because it was imported into RBD.#033[00m
Nov 25 03:27:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16656392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:06 np0005534516 kernel: tap521823df-58: entered promiscuous mode
Nov 25 03:27:06 np0005534516 NetworkManager[48915]: <info>  [1764059226.6885] manager: (tap521823df-58): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Nov 25 03:27:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:06Z|00128|binding|INFO|Claiming lport 521823df-589a-4370-a3ea-a5a6f4c73a6a for this chassis.
Nov 25 03:27:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:06Z|00129|binding|INFO|521823df-589a-4370-a3ea-a5a6f4c73a6a: Claiming fa:16:3e:3f:ef:3b 10.100.0.6
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.704 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:ef:3b 10.100.0.6'], port_security=['fa:16:3e:3f:ef:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=521823df-589a-4370-a3ea-a5a6f4c73a6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.705 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 521823df-589a-4370-a3ea-a5a6f4c73a6a in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.706 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.713 253542 DEBUG oslo_concurrency.processutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2015a45e-5588-40d3-93ff-3e3ef4599acc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.719 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.722 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe16207-b281-45a1-bb85-8b583be0e632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb8ee88-db82-4dfd-9903-bd6f6f2d6eda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.729 253542 DEBUG nova.compute.provider_tree [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:06 np0005534516 systemd-machined[215790]: New machine qemu-29-instance-00000019.
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.744 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cf343e-e3ef-4d26-9d11-80abbbe80498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.747 253542 DEBUG nova.scheduler.client.report [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:06 np0005534516 systemd-udevd[286265]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.765 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:06 np0005534516 NetworkManager[48915]: <info>  [1764059226.7721] device (tap521823df-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:27:06 np0005534516 NetworkManager[48915]: <info>  [1764059226.7730] device (tap521823df-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2000e3-e010-4963-9ebe-f678906e74fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:06Z|00130|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a ovn-installed in OVS
Nov 25 03:27:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:06Z|00131|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a up in Southbound
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.792 253542 INFO nova.scheduler.client.report [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.804 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6e66aad4-70ef-4f7c-bb62-0efec26bcd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 NetworkManager[48915]: <info>  [1764059226.8111] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81c7ed6d-7f0b-4f40-a415-007820056d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.842 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d15174eb-91d5-44b4-880d-37d8e7369672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.843 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 19.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.845 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8683b179-3543-44b3-941c-63862df46245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 NetworkManager[48915]: <info>  [1764059226.8688] device (tapba659d6c-c0): carrier: link connected
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.874 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eeda2e7f-79ba-4aec-8f1f-f7af777bfbb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.893 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b959388d-6ce8-4ad7-93c6-8918925b1c21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458146, 'reachable_time': 30819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286295, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c69db2-1a85-47f8-a1e4-a42eb443e8f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458146, 'tstamp': 458146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286296, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.923 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.924 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.925 253542 INFO nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Rebooting instance#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[422d8087-64f2-4fb4-aa74-e27390cd44bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458146, 'reachable_time': 30819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286297, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.950 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.952 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:06 np0005534516 nova_compute[253538]: 2025-11-25 08:27:06.952 253542 DEBUG nova.network.neutron [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:27:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.964 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b395e410-84bf-41c1-9fdd-15533cd1712a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2ff0fc-2f0a-4a5f-b5f3-63bcbeb77d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:07 np0005534516 NetworkManager[48915]: <info>  [1764059227.0397] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:07 np0005534516 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:07Z|00132|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:27:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 228 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.068 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5b1216-6b7f-4140-b059-0fd676b60dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.070 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:27:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.072 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:27:07 np0005534516 podman[286346]: 2025-11-25 08:27:07.488770187 +0000 UTC m=+0.020338505 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.650 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059227.6497922, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.651 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Started (Lifecycle Event)#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.680 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.685 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059227.64991, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.685 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:27:07 np0005534516 podman[286346]: 2025-11-25 08:27:07.691129948 +0000 UTC m=+0.222698246 container create ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.704 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.707 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:07 np0005534516 nova_compute[253538]: 2025-11-25 08:27:07.721 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:07 np0005534516 systemd[1]: Started libpod-conmon-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope.
Nov 25 03:27:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301b66a4b9ad7a58ec132f0b777f3b1abc4e7a7e468241d4a3386fefbc812d53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:07 np0005534516 podman[286346]: 2025-11-25 08:27:07.927980584 +0000 UTC m=+0.459548902 container init ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:27:07 np0005534516 podman[286346]: 2025-11-25 08:27:07.934487915 +0000 UTC m=+0.466056213 container start ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 03:27:07 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : New worker (286391) forked
Nov 25 03:27:07 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : Loading success.
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.058 253542 DEBUG nova.network.neutron [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.090 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.092 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.240 253542 DEBUG nova.compute.manager [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.241 253542 DEBUG nova.compute.manager [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.241 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.242 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.242 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:27:08 np0005534516 kernel: tap0416b402-08 (unregistering): left promiscuous mode
Nov 25 03:27:08 np0005534516 NetworkManager[48915]: <info>  [1764059228.4101] device (tap0416b402-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:08Z|00133|binding|INFO|Releasing lport 0416b402-0842-4b73-910b-d30a5750474c from this chassis (sb_readonly=0)
Nov 25 03:27:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:08Z|00134|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c down in Southbound
Nov 25 03:27:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:08Z|00135|binding|INFO|Removing iface tap0416b402-08 ovn-installed in OVS
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.428 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.429 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.431 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.451 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9deb1c-29cc-4a8d-b2c5-3f2041dc9795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 03:27:08 np0005534516 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Consumed 5.582s CPU time.
Nov 25 03:27:08 np0005534516 systemd-machined[215790]: Machine qemu-28-instance-00000018 terminated.
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.478 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9c0236-983d-478c-ba2b-1fd6940d6f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.481 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47f69743-3728-435d-b64c-d6ffa7227ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.508 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a7e06c-a7d6-45c9-be32-a164ff449fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.523 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29e1a1fc-f198-4e9c-bde5-7fbc16294c84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286409, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1e3584-092d-4070-9361-df079ebaf618]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286410, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286410, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.545 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.551 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.551 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:27:08 np0005534516 NetworkManager[48915]: <info>  [1764059228.6529] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.676 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance destroyed successfully.#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.677 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.689 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.690 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.691 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.691 253542 DEBUG os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.694 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0416b402-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.701 253542 INFO os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.707 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start _get_guest_xml network_info=[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.711 253542 WARNING nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.716 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.717 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.719 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.719 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'vcpu_model' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:08 np0005534516 nova_compute[253538]: 2025-11-25 08:27:08.734 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:08 np0005534516 podman[286423]: 2025-11-25 08:27:08.826217198 +0000 UTC m=+0.100730869 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.052 253542 DEBUG nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.054 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.054 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.055 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.056 253542 DEBUG nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.056 253542 WARNING nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:27:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 25 03:27:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080990611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.200 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.233 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.408 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.409 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.422 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:27:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683937940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.675 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.676 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.676 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.677 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.678 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.689 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <uuid>c787de46-dba9-458e-acc0-57470097fac5</uuid>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <name>instance-00000018</name>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:name>tempest-SecurityGroupsTestJSON-server-910851624</nova:name>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:27:08</nova:creationTime>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <nova:port uuid="0416b402-0842-4b73-910b-d30a5750474c">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="serial">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="uuid">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk.config">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:97:a5:e1"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <target dev="tap0416b402-08"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log" append="off"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:27:09 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:27:09 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:27:09 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:27:09 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.691 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.692 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.695 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.695 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.697 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.698 253542 DEBUG os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.700 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.701 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.706 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0416b402-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.707 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0416b402-08, col_values=(('external_ids', {'iface-id': '0416b402-0842-4b73-910b-d30a5750474c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:a5:e1', 'vm-uuid': 'c787de46-dba9-458e-acc0-57470097fac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 NetworkManager[48915]: <info>  [1764059229.7095] manager: (tap0416b402-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.714 253542 INFO os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 kernel: tap0416b402-08: entered promiscuous mode
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:09Z|00136|binding|INFO|Claiming lport 0416b402-0842-4b73-910b-d30a5750474c for this chassis.
Nov 25 03:27:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:09Z|00137|binding|INFO|0416b402-0842-4b73-910b-d30a5750474c: Claiming fa:16:3e:97:a5:e1 10.100.0.4
Nov 25 03:27:09 np0005534516 NetworkManager[48915]: <info>  [1764059229.8044] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.814 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:09Z|00138|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c ovn-installed in OVS
Nov 25 03:27:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:09Z|00139|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c up in Southbound
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d6209c-e531-465f-8f9a-b6b984765f0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 systemd-machined[215790]: New machine qemu-30-instance-00000018.
Nov 25 03:27:09 np0005534516 systemd[1]: Started Virtual Machine qemu-30-instance-00000018.
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.865 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8de7897e-4571-41bc-9816-42c37e698581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.868 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb67f60c-c0a0-436f-be50-b759305138fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 systemd-udevd[286530]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:09 np0005534516 NetworkManager[48915]: <info>  [1764059229.8886] device (tap0416b402-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:27:09 np0005534516 NetworkManager[48915]: <info>  [1764059229.8896] device (tap0416b402-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.908 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcf9c40-f692-4949-a867-04fd1beb6de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a839ae78-e1e1-4f3f-b938-28e425b057b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286539, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.944 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cdea10da-6f04-42f4-8261-16468012d46a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286541, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286541, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.946 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.948 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:27:09 np0005534516 nova_compute[253538]: 2025-11-25 08:27:09.961 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.345 253542 DEBUG nova.compute.manager [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.345 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.346 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.346 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.347 253542 DEBUG nova.compute.manager [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Processing event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.348 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.352 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.3520625, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.352 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.366 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.370 253542 INFO nova.virt.libvirt.driver [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance spawned successfully.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.370 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.386 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.392 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.395 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.396 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.396 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.397 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.397 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.398 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.420 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.444 253542 INFO nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 16.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.445 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.491 253542 INFO nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 18.35 seconds to build instance.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.505 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.576 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for c787de46-dba9-458e-acc0-57470097fac5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.577 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.5742455, c787de46-dba9-458e-acc0-57470097fac5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.577 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.579 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.583 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance rebooted successfully.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.583 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.615 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.619 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.647 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.647 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.5768902, c787de46-dba9-458e-acc0-57470097fac5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.648 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Started (Lifecycle Event)#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.652 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.671 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:10 np0005534516 nova_compute[253538]: 2025-11-25 08:27:10.675 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.154 253542 DEBUG nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.155 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.155 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 DEBUG nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 WARNING nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.618 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.632 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.632 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.869 253542 INFO nova.compute.manager [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Pausing#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.870 253542 DEBUG nova.objects.instance [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'flavor' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.919 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059231.9190938, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.920 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.922 253542 DEBUG nova.compute.manager [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.941 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.945 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:11 np0005534516 nova_compute[253538]: 2025-11-25 08:27:11.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.421 253542 DEBUG nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.423 253542 WARNING nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received unexpected event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with vm_state paused and task_state None.#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:27:12 np0005534516 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012209073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.005 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 192 op/s
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.090 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.091 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.097 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.098 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.102 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.103 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.305 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.307 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3977MB free_disk=59.9009895324707GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.308 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.308 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ca088afd-31e5-497b-bfc5-ba1f56096642 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance c787de46-dba9-458e-acc0-57470097fac5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0fc86c7e-5de2-431c-9152-cfe293f8cc7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:27:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:13Z|00140|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 03:27:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:13Z|00141|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.448 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.826 253542 DEBUG nova.compute.manager [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.827 253542 DEBUG nova.compute.manager [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:27:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447937006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.901 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.908 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.936 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.965 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:27:13 np0005534516 nova_compute[253538]: 2025-11-25 08:27:13.966 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.394 253542 DEBUG nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.429 253542 INFO nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] instance snapshotting#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.429 253542 WARNING nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.492 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.493 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.495 253542 INFO nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Terminating instance#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.496 253542 DEBUG nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:27:14 np0005534516 kernel: tap0416b402-08 (unregistering): left promiscuous mode
Nov 25 03:27:14 np0005534516 NetworkManager[48915]: <info>  [1764059234.5341] device (tap0416b402-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:14Z|00142|binding|INFO|Releasing lport 0416b402-0842-4b73-910b-d30a5750474c from this chassis (sb_readonly=0)
Nov 25 03:27:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:14Z|00143|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c down in Southbound
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:14Z|00144|binding|INFO|Removing iface tap0416b402-08 ovn-installed in OVS
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '69424ef4-6807-4dfd-9ed6-238b08ebb77e 94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.556 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.558 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56af7d81-09b9-4386-b98b-ad0aca9d23c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 03:27:14 np0005534516 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Consumed 4.754s CPU time.
Nov 25 03:27:14 np0005534516 systemd-machined[215790]: Machine qemu-30-instance-00000018 terminated.
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.596 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e766f2be-bf4d-4607-a329-af208abe0935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.599 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[718ead1f-d223-4206-8f96-fd92e8e0c428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.623 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b3d092-eab4-4f93-851b-6f32b2ea02fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6765490-9c70-4ad3-b265-58a2e5dfb1ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286641, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.655 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ff16d0-9265-404e-98eb-41e9374bf3dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286642, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286642, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.656 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.662 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.739 253542 INFO nova.virt.libvirt.driver [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Beginning live snapshot process#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.743 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance destroyed successfully.#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.744 253542 DEBUG nova.objects.instance [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.760 253542 DEBUG nova.virt.libvirt.vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:10Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.761 253542 DEBUG nova.network.os_vif_util [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.762 253542 DEBUG nova.network.os_vif_util [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.762 253542 DEBUG os_vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.765 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0416b402-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.773 253542 INFO os_vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.891 253542 DEBUG nova.virt.libvirt.imagebackend [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:27:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.939 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.940 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.959 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:14 np0005534516 nova_compute[253538]: 2025-11-25 08:27:14.960 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 712 KiB/s wr, 246 op/s
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.078 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(829c97c5e20f4a8eba46793041b33893) on rbd image(0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:27:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 25 03:27:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 03:27:15 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.229 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk@829c97c5e20f4a8eba46793041b33893 to images/0743e309-9e26-4d9c-aa8d-6c681073dac1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.263 253542 INFO nova.virt.libvirt.driver [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deleting instance files /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5_del#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.264 253542 INFO nova.virt.libvirt.driver [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deletion of /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5_del complete#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.335 253542 INFO nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.336 253542 DEBUG oslo.service.loopingcall [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.336 253542 DEBUG nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.337 253542 DEBUG nova.network.neutron [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.353 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/0743e309-9e26-4d9c-aa8d-6c681073dac1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:15 np0005534516 nova_compute[253538]: 2025-11-25 08:27:15.735 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(829c97c5e20f4a8eba46793041b33893) on rbd image(0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:27:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 25 03:27:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 03:27:16 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.219 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(0743e309-9e26-4d9c-aa8d-6c681073dac1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.682 253542 DEBUG nova.network.neutron [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.711 253542 INFO nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.758 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.758 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.765 253542 DEBUG nova.compute.manager [req-13ed3801-a1f3-4f37-98a9-1dbe215d1dca req-1aa031e2-3c70-46d3-981d-dea478ecb950 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-deleted-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:16 np0005534516 nova_compute[253538]: 2025-11-25 08:27:16.833 253542 DEBUG oslo_concurrency.processutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 195 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 119 KiB/s wr, 287 op/s
Nov 25 03:27:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 25 03:27:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 03:27:17 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 25 03:27:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638005216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.271 253542 DEBUG oslo_concurrency.processutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.277 253542 DEBUG nova.compute.provider_tree [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.298 253542 DEBUG nova.scheduler.client.report [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.323 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.348 253542 INFO nova.scheduler.client.report [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Deleted allocations for instance c787de46-dba9-458e-acc0-57470097fac5#033[00m
Nov 25 03:27:17 np0005534516 nova_compute[253538]: 2025-11-25 08:27:17.409 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 223 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 303 op/s
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.138 253542 INFO nova.virt.libvirt.driver [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Snapshot image upload complete#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.139 253542 INFO nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 4.71 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:19 np0005534516 nova_compute[253538]: 2025-11-25 08:27:19.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 03:27:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 25 03:27:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 03:27:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 25 03:27:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.760 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.761 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.838 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.840 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.841 253542 INFO nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Terminating instance#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.842 253542 DEBUG nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:27:22 np0005534516 kernel: tap521823df-58 (unregistering): left promiscuous mode
Nov 25 03:27:22 np0005534516 NetworkManager[48915]: <info>  [1764059242.8888] device (tap521823df-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:22Z|00145|binding|INFO|Releasing lport 521823df-589a-4370-a3ea-a5a6f4c73a6a from this chassis (sb_readonly=0)
Nov 25 03:27:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:22Z|00146|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a down in Southbound
Nov 25 03:27:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:22Z|00147|binding|INFO|Removing iface tap521823df-58 ovn-installed in OVS
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.903 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:ef:3b 10.100.0.6'], port_security=['fa:16:3e:3f:ef:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=521823df-589a-4370-a3ea-a5a6f4c73a6a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.904 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 521823df-589a-4370-a3ea-a5a6f4c73a6a in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.905 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.906 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[761f5ad5-3da1-4077-8853-a3ca39aee63e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.907 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore#033[00m
Nov 25 03:27:22 np0005534516 nova_compute[253538]: 2025-11-25 08:27:22.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:22 np0005534516 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 25 03:27:22 np0005534516 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 2.440s CPU time.
Nov 25 03:27:22 np0005534516 systemd-machined[215790]: Machine qemu-29-instance-00000019 terminated.
Nov 25 03:27:23 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : haproxy version is 2.8.14-c23fe91
Nov 25 03:27:23 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : path to executable is /usr/sbin/haproxy
Nov 25 03:27:23 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [WARNING]  (286389) : Exiting Master process...
Nov 25 03:27:23 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [ALERT]    (286389) : Current worker (286391) exited with code 143 (Terminated)
Nov 25 03:27:23 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [WARNING]  (286389) : All workers exited. Exiting... (0)
Nov 25 03:27:23 np0005534516 systemd[1]: libpod-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope: Deactivated successfully.
Nov 25 03:27:23 np0005534516 podman[286864]: 2025-11-25 08:27:23.048889514 +0000 UTC m=+0.049416809 container died ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 159 op/s
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.079 253542 INFO nova.virt.libvirt.driver [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance destroyed successfully.#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.079 253542 DEBUG nova.objects.instance [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-301b66a4b9ad7a58ec132f0b777f3b1abc4e7a7e468241d4a3386fefbc812d53-merged.mount: Deactivated successfully.
Nov 25 03:27:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06-userdata-shm.mount: Deactivated successfully.
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:23 np0005534516 podman[286864]: 2025-11-25 08:27:23.095573927 +0000 UTC m=+0.096101222 container cleanup ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.097 253542 DEBUG nova.virt.libvirt.vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:19Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.097 253542 DEBUG nova.network.os_vif_util [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.098 253542 DEBUG nova.network.os_vif_util [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.098 253542 DEBUG os_vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521823df-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:23 np0005534516 systemd[1]: libpod-conmon-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope: Deactivated successfully.
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.107 253542 INFO os_vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58')#033[00m
Nov 25 03:27:23 np0005534516 podman[286902]: 2025-11-25 08:27:23.21672841 +0000 UTC m=+0.099458264 container remove ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.224 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64d77ff4-9134-4af2-90e2-6ebc736f100b]: (4, ('Tue Nov 25 08:27:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06)\nad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06\nTue Nov 25 08:27:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06)\nad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[326dd756-184d-4100-a5b5-690d7aea9388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.227 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3c4750-c75c-4c36-a5d6-322f7348281e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[202355d6-2481-4b10-af9e-21d0fd7a8956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00fdab35-dce1-4a0e-869c-bc657d37c453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a1a3e4-ea89-4d5e-918c-7662af9a8c2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458139, 'reachable_time': 39620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286935, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.291 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.291 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a1d904-272f-4e52-b0a9-e0300ab3f405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.642 253542 INFO nova.virt.libvirt.driver [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deleting instance files /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_del#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.643 253542 INFO nova.virt.libvirt.driver [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deletion of /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_del complete#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.707 253542 INFO nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.708 253542 DEBUG oslo.service.loopingcall [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.709 253542 DEBUG nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.709 253542 DEBUG nova.network.neutron [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.892 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.894 253542 INFO nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Terminating instance#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.895 253542 DEBUG nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:27:23 np0005534516 kernel: tap2089bf75-61 (unregistering): left promiscuous mode
Nov 25 03:27:23 np0005534516 NetworkManager[48915]: <info>  [1764059243.9668] device (tap2089bf75-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:23Z|00148|binding|INFO|Releasing lport 2089bf75-6119-4c42-a326-989b3931ec08 from this chassis (sb_readonly=0)
Nov 25 03:27:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:23Z|00149|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 down in Southbound
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:23Z|00150|binding|INFO|Removing iface tap2089bf75-61 ovn-installed in OVS
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.986 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c0:7d 10.100.0.7'], port_security=['fa:16:3e:b9:c0:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca088afd-31e5-497b-bfc5-ba1f56096642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '401f5f56-5a06-49e0-952d-2d39380ca37b 883216ab-7df8-4c90-b073-93f5d75fcaa1 94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2089bf75-6119-4c42-a326-989b3931ec08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2089bf75-6119-4c42-a326-989b3931ec08 in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.988 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[118c2ae7-080a-49a2-ac32-a3829e689828]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.989 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 namespace which is not needed anymore#033[00m
Nov 25 03:27:23 np0005534516 nova_compute[253538]: 2025-11-25 08:27:23.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 25 03:27:24 np0005534516 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Consumed 15.853s CPU time.
Nov 25 03:27:24 np0005534516 systemd-machined[215790]: Machine qemu-26-instance-00000017 terminated.
Nov 25 03:27:24 np0005534516 NetworkManager[48915]: <info>  [1764059244.1125] manager: (tap2089bf75-61): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : haproxy version is 2.8.14-c23fe91
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : path to executable is /usr/sbin/haproxy
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : Exiting Master process...
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : Exiting Master process...
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [ALERT]    (283572) : Current worker (283574) exited with code 143 (Terminated)
Nov 25 03:27:24 np0005534516 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : All workers exited. Exiting... (0)
Nov 25 03:27:24 np0005534516 systemd[1]: libpod-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope: Deactivated successfully.
Nov 25 03:27:24 np0005534516 podman[286958]: 2025-11-25 08:27:24.125225228 +0000 UTC m=+0.046313793 container died e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.126 253542 INFO nova.virt.libvirt.driver [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance destroyed successfully.#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.126 253542 DEBUG nova.objects.instance [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.139 253542 DEBUG nova.virt.libvirt.vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:26:19Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.139 253542 DEBUG nova.network.os_vif_util [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.140 253542 DEBUG nova.network.os_vif_util [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.140 253542 DEBUG os_vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.141 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2089bf75-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.149 253542 INFO os_vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61')#033[00m
Nov 25 03:27:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117-userdata-shm.mount: Deactivated successfully.
Nov 25 03:27:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d651f3fccf787af2f2b211bc049772630cbec479a6881f484dd2c0eb6af6c354-merged.mount: Deactivated successfully.
Nov 25 03:27:24 np0005534516 podman[286958]: 2025-11-25 08:27:24.181957688 +0000 UTC m=+0.103046233 container cleanup e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:27:24 np0005534516 systemd[1]: libpod-conmon-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope: Deactivated successfully.
Nov 25 03:27:24 np0005534516 podman[287016]: 2025-11-25 08:27:24.255595927 +0000 UTC m=+0.050977372 container remove e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43e608b0-fecb-4b2d-a6db-be17556b80e9]: (4, ('Tue Nov 25 08:27:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 (e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117)\ne27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117\nTue Nov 25 08:27:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 (e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117)\ne27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b341679-ca46-46e4-9aad-c70d8cea9065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.263 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 kernel: tapec4e7ebb-a0: left promiscuous mode
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bb779d-5eef-4ce2-a73f-4d03f7ac0af7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.348 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3831cf37-93fa-4330-8dd6-a7494ec8d75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.349 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4624f8d0-7a42-44b0-a451-99261d5657ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61415e8d-aeac-4060-b6df-b709ba9aaeed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452873, 'reachable_time': 41890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287031, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.364 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:27:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.364 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[715bfdf5-bc7c-4bcf-95ce-805b222e5a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:24 np0005534516 systemd[1]: run-netns-ovnmeta\x2dec4e7ebb\x2daba7\x2d46f2\x2d8d8d\x2df7d49f5af954.mount: Deactivated successfully.
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.577 253542 DEBUG nova.network.neutron [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.592 253542 INFO nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.638 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.638 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.706 253542 INFO nova.virt.libvirt.driver [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deleting instance files /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642_del#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.707 253542 INFO nova.virt.libvirt.driver [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deletion of /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642_del complete#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.736 253542 DEBUG oslo_concurrency.processutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.783 253542 INFO nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.784 253542 DEBUG oslo.service.loopingcall [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.787 253542 DEBUG nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.787 253542 DEBUG nova.network.neutron [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.874 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.875 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.875 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:24 np0005534516 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:27:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 25 03:27:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 03:27:24 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 25 03:27:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 128 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Nov 25 03:27:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876696747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.202 253542 DEBUG oslo_concurrency.processutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.208 253542 DEBUG nova.compute.provider_tree [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.215 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.217 253542 WARNING nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received unexpected event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.217 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-deleted-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.221 253542 DEBUG nova.scheduler.client.report [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.240 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.266 253542 INFO nova.scheduler.client.report [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 0fc86c7e-5de2-431c-9152-cfe293f8cc7d#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.357 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.461 253542 DEBUG nova.network.neutron [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.476 253542 INFO nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 0.69 seconds to deallocate network for instance.#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.493 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.494 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.510 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.533 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.534 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.578 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.586 253542 DEBUG oslo_concurrency.processutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550632676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:25 np0005534516 nova_compute[253538]: 2025-11-25 08:27:25.997 253542 DEBUG oslo_concurrency.processutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.002 253542 DEBUG nova.compute.provider_tree [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.017 253542 DEBUG nova.scheduler.client.report [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.043 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.045 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.051 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.051 253542 INFO nova.compute.claims [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.081 253542 INFO nova.scheduler.client.report [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Deleted allocations for instance ca088afd-31e5-497b-bfc5-ba1f56096642#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.147 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.169 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897132439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.560 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.565 253542 DEBUG nova.compute.provider_tree [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.582 253542 DEBUG nova.scheduler.client.report [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.606 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.607 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.673 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.673 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.697 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.718 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.806 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.808 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.808 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating image(s)#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.833 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.861 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.882 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.885 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.941 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.960 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.965 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.994 253542 DEBUG nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.996 253542 DEBUG nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:26 np0005534516 nova_compute[253538]: 2025-11-25 08:27:26.996 253542 WARNING nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received unexpected event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:27:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 83 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 586 KiB/s wr, 102 op/s
Nov 25 03:27:27 np0005534516 nova_compute[253538]: 2025-11-25 08:27:27.355 253542 DEBUG nova.compute.manager [req-e6acb18e-070d-4e06-911e-b8da5f460392 req-e32861e6-24d4-4ae7-b81a-6d82e92966ca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-deleted-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.106 253542 DEBUG nova.policy [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.154 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.217 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.333 253542 DEBUG nova.objects.instance [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.356 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.357 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Ensure instance console log exists: /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.357 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.358 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:28 np0005534516 nova_compute[253538]: 2025-11-25 08:27:28.358 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:27:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:27:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:27:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:27:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 58 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 930 KiB/s wr, 128 op/s
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.483 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Successfully created port: d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:27:29 np0005534516 podman[287289]: 2025-11-25 08:27:29.515635388 +0000 UTC m=+0.094747593 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.735 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059234.732528, c787de46-dba9-458e-acc0-57470097fac5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.735 253542 INFO nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.768 253542 DEBUG nova.compute.manager [None req-7868c83d-cef1-4c14-b055-f04fb2931d79 - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:29 np0005534516 nova_compute[253538]: 2025-11-25 08:27:29.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 25 03:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 03:27:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9cd63bc1-3d4b-4033-a776-2fff13e856a7 does not exist
Nov 25 03:27:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3dc20461-e5af-4d4b-baf9-dece8a303856 does not exist
Nov 25 03:27:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 80ca0317-a9de-4908-90d2-ad8a6df29e33 does not exist
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:27:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:27:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:30.763 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.861647787 +0000 UTC m=+0.046754135 container create 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:30 np0005534516 systemd[1]: Started libpod-conmon-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope.
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.839839573 +0000 UTC m=+0.024945921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.962590851 +0000 UTC m=+0.147697219 container init 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.970224693 +0000 UTC m=+0.155331031 container start 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:27:30 np0005534516 systemd[1]: libpod-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope: Deactivated successfully.
Nov 25 03:27:30 np0005534516 zen_hopper[287572]: 167 167
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.978032208 +0000 UTC m=+0.163138556 container attach 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:27:30 np0005534516 conmon[287572]: conmon 3bf5018f1c110f251759 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope/container/memory.events
Nov 25 03:27:30 np0005534516 podman[287556]: 2025-11-25 08:27:30.978374858 +0000 UTC m=+0.163481187 container died 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 03:27:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ac1d07c6bf788864a02885dde0a4826cab77a929c996a4098073c777cf857f91-merged.mount: Deactivated successfully.
Nov 25 03:27:31 np0005534516 podman[287556]: 2025-11-25 08:27:31.041968958 +0000 UTC m=+0.227075296 container remove 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:27:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 70 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.4 MiB/s wr, 146 op/s
Nov 25 03:27:31 np0005534516 systemd[1]: libpod-conmon-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope: Deactivated successfully.
Nov 25 03:27:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:27:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:27:31 np0005534516 podman[287596]: 2025-11-25 08:27:31.241203133 +0000 UTC m=+0.047129915 container create 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.250 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Successfully updated port: d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:27:31 np0005534516 systemd[1]: Started libpod-conmon-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope.
Nov 25 03:27:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:31 np0005534516 podman[287596]: 2025-11-25 08:27:31.221892128 +0000 UTC m=+0.027818890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:31 np0005534516 podman[287596]: 2025-11-25 08:27:31.336342657 +0000 UTC m=+0.142269429 container init 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:31 np0005534516 podman[287596]: 2025-11-25 08:27:31.345729367 +0000 UTC m=+0.151656119 container start 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:27:31 np0005534516 podman[287596]: 2025-11-25 08:27:31.349263405 +0000 UTC m=+0.155190177 container attach 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.432 253542 DEBUG nova.compute.manager [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-changed-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.433 253542 DEBUG nova.compute.manager [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Refreshing instance network info cache due to event network-changed-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.433 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:31 np0005534516 nova_compute[253538]: 2025-11-25 08:27:31.467 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:27:32 np0005534516 adoring_lalande[287613]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:27:32 np0005534516 adoring_lalande[287613]: --> relative data size: 1.0
Nov 25 03:27:32 np0005534516 adoring_lalande[287613]: --> All data devices are unavailable
Nov 25 03:27:32 np0005534516 systemd[1]: libpod-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope: Deactivated successfully.
Nov 25 03:27:32 np0005534516 podman[287642]: 2025-11-25 08:27:32.398152798 +0000 UTC m=+0.030738382 container died 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 03:27:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb-merged.mount: Deactivated successfully.
Nov 25 03:27:32 np0005534516 podman[287642]: 2025-11-25 08:27:32.718444235 +0000 UTC m=+0.351029809 container remove 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:27:32 np0005534516 systemd[1]: libpod-conmon-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope: Deactivated successfully.
Nov 25 03:27:33 np0005534516 podman[287682]: 2025-11-25 08:27:32.997024276 +0000 UTC m=+0.085251331 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.264 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.306 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.307 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance network_info: |[{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.308 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.308 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Refreshing network info cache for port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.314 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start _get_guest_xml network_info=[{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.323 253542 WARNING nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.330 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.332 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.343 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.344 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.345 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.345 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.346 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.346 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.349 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.353 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:33 np0005534516 podman[287834]: 2025-11-25 08:27:33.616178424 +0000 UTC m=+0.099905327 container create 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:27:33 np0005534516 podman[287834]: 2025-11-25 08:27:33.538624257 +0000 UTC m=+0.022351190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:33 np0005534516 systemd[1]: Started libpod-conmon-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope.
Nov 25 03:27:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991690127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:33 np0005534516 nova_compute[253538]: 2025-11-25 08:27:33.778 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:33 np0005534516 podman[287834]: 2025-11-25 08:27:33.93610574 +0000 UTC m=+0.419832733 container init 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:27:33 np0005534516 podman[287834]: 2025-11-25 08:27:33.943425083 +0000 UTC m=+0.427151986 container start 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:27:33 np0005534516 crazy_gould[287851]: 167 167
Nov 25 03:27:33 np0005534516 systemd[1]: libpod-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope: Deactivated successfully.
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.004 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.007 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:34 np0005534516 podman[287834]: 2025-11-25 08:27:34.015901569 +0000 UTC m=+0.499628482 container attach 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:27:34 np0005534516 podman[287834]: 2025-11-25 08:27:34.02173424 +0000 UTC m=+0.505461143 container died 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0c2756e7871396c548ef9241b696d2f78ce0cf44a7f0053c39e78d92809c6dbc-merged.mount: Deactivated successfully.
Nov 25 03:27:34 np0005534516 podman[287834]: 2025-11-25 08:27:34.216154502 +0000 UTC m=+0.699881415 container remove 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:27:34 np0005534516 systemd[1]: libpod-conmon-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope: Deactivated successfully.
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:34 np0005534516 podman[287916]: 2025-11-25 08:27:34.411805078 +0000 UTC m=+0.047021733 container create 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:27:34 np0005534516 systemd[1]: Started libpod-conmon-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope.
Nov 25 03:27:34 np0005534516 podman[287916]: 2025-11-25 08:27:34.390862808 +0000 UTC m=+0.026079483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:34 np0005534516 podman[287916]: 2025-11-25 08:27:34.525872335 +0000 UTC m=+0.161089090 container init 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930265700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:34 np0005534516 podman[287916]: 2025-11-25 08:27:34.533086174 +0000 UTC m=+0.168302829 container start 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:27:34 np0005534516 podman[287916]: 2025-11-25 08:27:34.540945542 +0000 UTC m=+0.176162217 container attach 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.550 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.552 253542 DEBUG nova.virt.libvirt.vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:26Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.552 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.553 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.554 253542 DEBUG nova.objects.instance [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.568 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <uuid>30afcbba-78f3-433c-ba0a-5a2d25cf2d48</uuid>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <name>instance-0000001a</name>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-2071435205</nova:name>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:27:33</nova:creationTime>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <nova:port uuid="d6aa33fe-8dd6-4546-aa75-715ad57e5b5c">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="serial">30afcbba-78f3-433c-ba0a-5a2d25cf2d48</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="uuid">30afcbba-78f3-433c-ba0a-5a2d25cf2d48</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:93:2b:39"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <target dev="tapd6aa33fe-8d"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/console.log" append="off"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:27:34 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:27:34 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:27:34 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:27:34 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Preparing to wait for external event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.570 253542 DEBUG nova.virt.libvirt.vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:26Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.570 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.571 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.571 253542 DEBUG os_vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.577 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6aa33fe-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.577 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6aa33fe-8d, col_values=(('external_ids', {'iface-id': 'd6aa33fe-8dd6-4546-aa75-715ad57e5b5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:2b:39', 'vm-uuid': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:27:34 np0005534516 NetworkManager[48915]: <info>  [1764059254.5800] manager: (tapd6aa33fe-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.588 253542 INFO os_vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d')
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.633 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:93:2b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Using config drive
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.659 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:34 np0005534516 nova_compute[253538]: 2025-11-25 08:27:34.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.232 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating config drive at /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.236 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm4h9nfd6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.300 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updated VIF entry in instance network info cache for port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.301 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.315 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]: {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    "0": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "devices": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "/dev/loop3"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            ],
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_name": "ceph_lv0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_size": "21470642176",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "name": "ceph_lv0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "tags": {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_name": "ceph",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.crush_device_class": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.encrypted": "0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_id": "0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.vdo": "0"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            },
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "vg_name": "ceph_vg0"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        }
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    ],
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    "1": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "devices": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "/dev/loop4"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            ],
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_name": "ceph_lv1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_size": "21470642176",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "name": "ceph_lv1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "tags": {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_name": "ceph",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.crush_device_class": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.encrypted": "0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_id": "1",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.vdo": "0"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            },
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "vg_name": "ceph_vg1"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        }
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    ],
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    "2": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "devices": [
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "/dev/loop5"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            ],
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_name": "ceph_lv2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_size": "21470642176",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "name": "ceph_lv2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "tags": {
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.cluster_name": "ceph",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.crush_device_class": "",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.encrypted": "0",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osd_id": "2",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:                "ceph.vdo": "0"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            },
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "type": "block",
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:            "vg_name": "ceph_vg2"
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:        }
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]:    ]
Nov 25 03:27:35 np0005534516 vigilant_chandrasekhar[287933]: }
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.378 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm4h9nfd6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:27:35 np0005534516 systemd[1]: libpod-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope: Deactivated successfully.
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.412 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.416 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:35 np0005534516 podman[287983]: 2025-11-25 08:27:35.447174848 +0000 UTC m=+0.026680810 container died 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.458 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.459 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.482 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:27:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd-merged.mount: Deactivated successfully.
Nov 25 03:27:35 np0005534516 podman[287983]: 2025-11-25 08:27:35.52853114 +0000 UTC m=+0.108037082 container remove 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:27:35 np0005534516 systemd[1]: libpod-conmon-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope: Deactivated successfully.
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.565 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.566 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.573 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.574 253542 INFO nova.compute.claims [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.598 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.599 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deleting local config drive /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config because it was imported into RBD.
Nov 25 03:27:35 np0005534516 kernel: tapd6aa33fe-8d: entered promiscuous mode
Nov 25 03:27:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:35Z|00151|binding|INFO|Claiming lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for this chassis.
Nov 25 03:27:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:35Z|00152|binding|INFO|d6aa33fe-8dd6-4546-aa75-715ad57e5b5c: Claiming fa:16:3e:93:2b:39 10.100.0.13
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.6584] manager: (tapd6aa33fe-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.675 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2b:39 10.100.0.13'], port_security=['fa:16:3e:93:2b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.676 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.677 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[023f8a68-111c-4a32-bd37-d2b8066ee69a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.691 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 03:27:35 np0005534516 systemd-udevd[288080]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.696 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.693 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.693 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51cb4b9d-2247-493c-9af6-3bdd896807aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f677bb3-bfea-4db6-9b02-ad925941f0b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:27:35 np0005534516 systemd-machined[215790]: New machine qemu-31-instance-0000001a.
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.7070] device (tapd6aa33fe-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.7084] device (tapd6aa33fe-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.707 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6f84e6-2bc3-42bf-9362-c27eaf006a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 systemd[1]: Started Virtual Machine qemu-31-instance-0000001a.
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f212446-e696-473a-81d8-ecb4887bee06]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:35Z|00153|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c ovn-installed in OVS
Nov 25 03:27:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:35Z|00154|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c up in Southbound
Nov 25 03:27:35 np0005534516 nova_compute[253538]: 2025-11-25 08:27:35.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.753 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a8612fd6-a70e-4d9c-be29-72e6ba2a31fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a482220e-0e2d-439e-b07b-9dac0eb4b8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.7588] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 25 03:27:35 np0005534516 systemd-udevd[288086]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.790 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7220eb-1c42-45e6-b1a7-84415f1e747d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[649c022a-fcf1-4cc2-8b94-4c40b9b83a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.8191] device (tapba659d6c-c0): carrier: link connected
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.823 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[77df593f-ecc9-43b0-a40f-3d3f6853e03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.841 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0e98ac-543d-4fb9-a484-f5fd1ec91a2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461041, 'reachable_time': 24761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288175, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.857 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb1abac-8d9c-4318-a95b-b90e5d3dd368]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461041, 'tstamp': 461041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288187, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.875 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[967f1271-4eb4-4e5c-94ce-a185cb766bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461041, 'reachable_time': 24761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288188, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.907 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d09e639-3ae8-45c8-916b-049c509300a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6eb185b-f701-47fd-9518-de546bd81002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.978 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.978 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:35 np0005534516 NetworkManager[48915]: <info>  [1764059255.9815] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 25 03:27:35 np0005534516 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 03:27:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.987 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:35Z|00155|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.013 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.014 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8458879-63bd-4c0e-bbfe-fa558915454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.015 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:27:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.017 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:27:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684891317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.165 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.175 253542 DEBUG nova.compute.provider_tree [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.190 253542 DEBUG nova.scheduler.client.report [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.207393471 +0000 UTC m=+0.046683064 container create c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.213 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.214 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:27:36 np0005534516 systemd[1]: Started libpod-conmon-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope.
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.264 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.265 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.185395942 +0000 UTC m=+0.024685555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.287 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.301041904 +0000 UTC m=+0.140331537 container init c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.303 253542 DEBUG nova.compute.manager [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG nova.compute.manager [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Processing event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.310461664 +0000 UTC m=+0.149751257 container start c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:27:36 np0005534516 gifted_euler[288281]: 167 167
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.31757671 +0000 UTC m=+0.156866303 container attach c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:27:36 np0005534516 systemd[1]: libpod-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope: Deactivated successfully.
Nov 25 03:27:36 np0005534516 conmon[288281]: conmon c47d5ea98aa78c4695bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope/container/memory.events
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.3200741 +0000 UTC m=+0.159363683 container died c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.331 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:27:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dc3ce88f042b26dbe9064ab00d824af8761c2a1816db79bf02cd9d027003b52c-merged.mount: Deactivated successfully.
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.382 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.3823512, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Started (Lifecycle Event)#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.385 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.389 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:27:36 np0005534516 podman[288238]: 2025-11-25 08:27:36.390359035 +0000 UTC m=+0.229648608 container remove c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.393 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance spawned successfully.#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.393 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:27:36 np0005534516 systemd[1]: libpod-conmon-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope: Deactivated successfully.
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.418 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.419 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.419 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.420 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.420 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.421 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.424 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.382505, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:27:36 np0005534516 podman[288336]: 2025-11-25 08:27:36.447406635 +0000 UTC m=+0.070424830 container create 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.461 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.477 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.387846, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:27:36 np0005534516 systemd[1]: Started libpod-conmon-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope.
Nov 25 03:27:36 np0005534516 podman[288336]: 2025-11-25 08:27:36.404378383 +0000 UTC m=+0.027396608 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:27:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedff8d23f1f42a2bd3738ba82be5e7d6fdc0734473529518931479e1db062ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:36 np0005534516 podman[288336]: 2025-11-25 08:27:36.534966168 +0000 UTC m=+0.157984383 container init 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:27:36 np0005534516 podman[288336]: 2025-11-25 08:27:36.541270003 +0000 UTC m=+0.164288198 container start 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:27:36 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : New worker (288376) forked
Nov 25 03:27:36 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : Loading success.
Nov 25 03:27:36 np0005534516 podman[288359]: 2025-11-25 08:27:36.56899756 +0000 UTC m=+0.046880059 container create 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:27:36 np0005534516 systemd[1]: Started libpod-conmon-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope.
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.632 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.640 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:36 np0005534516 podman[288359]: 2025-11-25 08:27:36.547127705 +0000 UTC m=+0.025010234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.645 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.646 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.647 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating image(s)#033[00m
Nov 25 03:27:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.674 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:36 np0005534516 podman[288359]: 2025-11-25 08:27:36.693686652 +0000 UTC m=+0.171569181 container init 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.695 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:36 np0005534516 podman[288359]: 2025-11-25 08:27:36.7019136 +0000 UTC m=+0.179796099 container start 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:27:36 np0005534516 podman[288359]: 2025-11-25 08:27:36.709024516 +0000 UTC m=+0.186907045 container attach 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.716 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.721 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.750 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.753 253542 INFO nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.753 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.789 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.790 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.791 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.791 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.816 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.821 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 575b6526-de38-4a80-a952-be1b891b4792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.857 253542 INFO nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 11.30 seconds to build instance.#033[00m
Nov 25 03:27:36 np0005534516 nova_compute[253538]: 2025-11-25 08:27:36.873 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.067 253542 DEBUG nova.policy [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cbe175b10d9243369c5cae0b1a0c718b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '16044f9687494680b68b927090e5afc5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:27:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 91 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.154 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 575b6526-de38-4a80-a952-be1b891b4792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.231 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] resizing rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.361 253542 DEBUG nova.objects.instance [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'migration_context' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.374 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Ensure instance console log exists: /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.376 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:37 np0005534516 hardcore_black[288387]: {
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_id": 1,
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "type": "bluestore"
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    },
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_id": 2,
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "type": "bluestore"
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    },
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_id": 0,
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:        "type": "bluestore"
Nov 25 03:27:37 np0005534516 hardcore_black[288387]:    }
Nov 25 03:27:37 np0005534516 hardcore_black[288387]: }
Nov 25 03:27:37 np0005534516 systemd[1]: libpod-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Deactivated successfully.
Nov 25 03:27:37 np0005534516 podman[288359]: 2025-11-25 08:27:37.783519299 +0000 UTC m=+1.261401818 container died 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:27:37 np0005534516 systemd[1]: libpod-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Consumed 1.037s CPU time.
Nov 25 03:27:37 np0005534516 nova_compute[253538]: 2025-11-25 08:27:37.788 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Successfully created port: 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.076 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059243.074372, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.076 253542 INFO nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.095 253542 DEBUG nova.compute.manager [None req-f77fe06f-35c1-4131-a261-3794bbda4539 - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.470 253542 DEBUG nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.472 253542 DEBUG nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.472 253542 WARNING nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:27:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa-merged.mount: Deactivated successfully.
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.953 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Successfully updated port: 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.967 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.967 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquired lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:38 np0005534516 nova_compute[253538]: 2025-11-25 08:27:38.968 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:27:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 104 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 931 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.122 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059244.1218808, ca088afd-31e5-497b-bfc5-ba1f56096642 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.123 253542 INFO nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.137 253542 DEBUG nova.compute.manager [None req-5ca2c5ed-d721-431e-ac93-21a53164a5a2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:39 np0005534516 podman[288359]: 2025-11-25 08:27:39.212235497 +0000 UTC m=+2.690118006 container remove 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:27:39 np0005534516 systemd[1]: libpod-conmon-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Deactivated successfully.
Nov 25 03:27:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:27:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:27:39 np0005534516 podman[288598]: 2025-11-25 08:27:39.320400981 +0000 UTC m=+0.405548617 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.328 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:27:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3d570d29-7fda-46e8-9bfd-852966078f3c does not exist
Nov 25 03:27:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b31b402-648e-4e3f-9073-e08a3940b324 does not exist
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.789 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.909 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.910 253542 DEBUG nova.objects.instance [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'flavor' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:39 np0005534516 nova_compute[253538]: 2025-11-25 08:27:39.933 253542 DEBUG nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.177 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.214 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Releasing lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.215 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance network_info: |[{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.219 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start _get_guest_xml network_info=[{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.226 253542 WARNING nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.234 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.236 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.240 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.240 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.241 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.242 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.242 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.243 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.243 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.244 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.244 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.245 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.246 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.251 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.640 253542 DEBUG nova.compute.manager [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-changed-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG nova.compute.manager [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Refreshing instance network info cache due to event network-changed-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.642 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Refreshing network info cache for port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:27:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518166724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.776 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.807 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:40 np0005534516 nova_compute[253538]: 2025-11-25 08:27:40.813 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.053 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 125 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 83 op/s
Nov 25 03:27:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259217215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.235 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.237 253542 DEBUG nova.virt.libvirt.vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNegativeTest
JSON-1651122487-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:36Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.237 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.238 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.239 253542 DEBUG nova.objects.instance [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.253 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <uuid>575b6526-de38-4a80-a952-be1b891b4792</uuid>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <name>instance-0000001b</name>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesNegativeTestJSON-server-719907871</nova:name>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:27:40</nova:creationTime>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:user uuid="cbe175b10d9243369c5cae0b1a0c718b">tempest-ImagesNegativeTestJSON-1651122487-project-member</nova:user>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:project uuid="16044f9687494680b68b927090e5afc5">tempest-ImagesNegativeTestJSON-1651122487</nova:project>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <nova:port uuid="1de27b55-f4ed-42e6-a9b2-65d84a8a77f2">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="serial">575b6526-de38-4a80-a952-be1b891b4792</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="uuid">575b6526-de38-4a80-a952-be1b891b4792</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/575b6526-de38-4a80-a952-be1b891b4792_disk">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/575b6526-de38-4a80-a952-be1b891b4792_disk.config">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:14:9a:0c"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <target dev="tap1de27b55-f4"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/console.log" append="off"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:27:41 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:27:41 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:27:41 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:27:41 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.254 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Preparing to wait for external event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.254 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG nova.virt.libvirt.vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNe
gativeTestJSON-1651122487-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:36Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.256 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.256 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG os_vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.258 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.264 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1de27b55-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.265 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1de27b55-f4, col_values=(('external_ids', {'iface-id': '1de27b55-f4ed-42e6-a9b2-65d84a8a77f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:9a:0c', 'vm-uuid': '575b6526-de38-4a80-a952-be1b891b4792'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:41 np0005534516 NetworkManager[48915]: <info>  [1764059261.2681] manager: (tap1de27b55-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.275 253542 INFO os_vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4')#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.336 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.337 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.337 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No VIF found with MAC fa:16:3e:14:9a:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.338 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Using config drive#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.368 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.869 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating config drive at /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config#033[00m
Nov 25 03:27:41 np0005534516 nova_compute[253538]: 2025-11-25 08:27:41.873 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jevnn_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.005 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jevnn_8" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.031 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.035 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config 575b6526-de38-4a80-a952-be1b891b4792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.157 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updated VIF entry in instance network info cache for port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.159 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.174 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.420 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config 575b6526-de38-4a80-a952-be1b891b4792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.421 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deleting local config drive /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config because it was imported into RBD.#033[00m
Nov 25 03:27:42 np0005534516 kernel: tap1de27b55-f4: entered promiscuous mode
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.5100] manager: (tap1de27b55-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 25 03:27:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:42Z|00156|binding|INFO|Claiming lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for this chassis.
Nov 25 03:27:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:42Z|00157|binding|INFO|1de27b55-f4ed-42e6-a9b2-65d84a8a77f2: Claiming fa:16:3e:14:9a:0c 10.100.0.9
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.526 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.528 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da bound to our chassis#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.530 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.542 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50f8c6ca-a330-42e9-bad1-5dc6b683dd04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.547 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57f2ccd3-e1 in ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:27:42 np0005534516 systemd-machined[215790]: New machine qemu-32-instance-0000001b.
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.551 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57f2ccd3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[477f6a10-ffa7-47a0-8333-807e0b428c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b2328658-8d74-4187-9b02-b4eba664ce02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.566 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[774ff828-195b-4ab0-bf6e-95290e31883f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.591 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[baad3c66-d6e6-4fca-85a2-5f0aa1ce9647]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 systemd-udevd[288812]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:42Z|00158|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 ovn-installed in OVS
Nov 25 03:27:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:42Z|00159|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 up in Southbound
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.6230] device (tap1de27b55-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.6242] device (tap1de27b55-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.634 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef74eecb-0284-4fb5-b7d2-dfc53627c249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.6412] manager: (tap57f2ccd3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 25 03:27:42 np0005534516 systemd-udevd[288820]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b37b037-d516-458b-974c-0cf55ce949db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.678 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82595a53-a03f-49d5-8377-8ccf6978a2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.681 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69379c7c-2785-49c2-8918-92c25a19f361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.7106] device (tap57f2ccd3-e0): carrier: link connected
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.717 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c2b829-ed47-412d-9d38-872dd0d1ca98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[139e32de-0a17-4630-ab83-8f3517953dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57f2ccd3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:48:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461730, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288842, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.751 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a894e95-527f-4dd6-a6c3-48815bdd12e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:48fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461730, 'tstamp': 461730}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288843, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b86809fd-4b1e-48ee-b430-cc6d38c5c333]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57f2ccd3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:48:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461730, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288844, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82eb9985-25c2-4aa1-b36b-c35c38a359fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab595cb-cccc-411b-b32c-155c4a73080b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.859 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57f2ccd3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57f2ccd3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.862 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 kernel: tap57f2ccd3-e0: entered promiscuous mode
Nov 25 03:27:42 np0005534516 NetworkManager[48915]: <info>  [1764059262.8632] manager: (tap57f2ccd3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57f2ccd3-e0, col_values=(('external_ids', {'iface-id': 'f5b8b379-b9d0-48f1-8e76-7cf52c7f9630'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:42Z|00160|binding|INFO|Releasing lport f5b8b379-b9d0-48f1-8e76-7cf52c7f9630 from this chassis (sb_readonly=0)
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.870 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.872 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aae2f4-5f2e-4ccb-afed-e2756dd7552d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.873 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:27:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.876 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'env', 'PROCESS_TAG=haproxy-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:27:42 np0005534516 nova_compute[253538]: 2025-11-25 08:27:42.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.092 253542 DEBUG nova.compute.manager [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.093 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.093 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.094 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.094 253542 DEBUG nova.compute.manager [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Processing event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:27:43 np0005534516 podman[288876]: 2025-11-25 08:27:43.3065462 +0000 UTC m=+0.039123494 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:27:43 np0005534516 podman[288876]: 2025-11-25 08:27:43.453695054 +0000 UTC m=+0.186272328 container create a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 03:27:43 np0005534516 systemd[1]: Started libpod-conmon-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope.
Nov 25 03:27:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:27:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754c241e4d2ab39b3351c7386a235b84734f23374e154e299f4df3e053aaff0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:27:43 np0005534516 podman[288876]: 2025-11-25 08:27:43.767578762 +0000 UTC m=+0.500156106 container init a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:27:43 np0005534516 podman[288876]: 2025-11-25 08:27:43.780809609 +0000 UTC m=+0.513386893 container start a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:27:43 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : New worker (288933) forked
Nov 25 03:27:43 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : Loading success.
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.918 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9179223, 575b6526-de38-4a80-a952-be1b891b4792 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.919 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Started (Lifecycle Event)#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.921 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.924 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.928 253542 INFO nova.virt.libvirt.driver [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance spawned successfully.#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.929 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.940 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.945 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.949 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.950 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.950 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.951 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.952 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.952 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9187248, 575b6526-de38-4a80-a952-be1b891b4792 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.977 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.980 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9238884, 575b6526-de38-4a80-a952-be1b891b4792 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.980 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:27:43 np0005534516 nova_compute[253538]: 2025-11-25 08:27:43.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.000 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.005 253542 INFO nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 7.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.005 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.013 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.058 253542 INFO nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 8.52 seconds to build instance.#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.070 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:44 np0005534516 nova_compute[253538]: 2025-11-25 08:27:44.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 DEBUG nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 WARNING nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received unexpected event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.567 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.568 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.568 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.569 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.569 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.570 253542 INFO nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Terminating instance#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.571 253542 DEBUG nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:27:45 np0005534516 kernel: tap1de27b55-f4 (unregistering): left promiscuous mode
Nov 25 03:27:45 np0005534516 NetworkManager[48915]: <info>  [1764059265.7046] device (tap1de27b55-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:45Z|00161|binding|INFO|Releasing lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 from this chassis (sb_readonly=0)
Nov 25 03:27:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:45Z|00162|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 down in Southbound
Nov 25 03:27:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:45Z|00163|binding|INFO|Removing iface tap1de27b55-f4 ovn-installed in OVS
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.788 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.790 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis#033[00m
Nov 25 03:27:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.791 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54a61603-4817-43e2-8307-ee3923036067]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.793 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da namespace which is not needed anymore#033[00m
Nov 25 03:27:45 np0005534516 nova_compute[253538]: 2025-11-25 08:27:45.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:45 np0005534516 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 25 03:27:45 np0005534516 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 2.879s CPU time.
Nov 25 03:27:45 np0005534516 systemd-machined[215790]: Machine qemu-32-instance-0000001b terminated.
Nov 25 03:27:45 np0005534516 kernel: tap1de27b55-f4: entered promiscuous mode
Nov 25 03:27:45 np0005534516 NetworkManager[48915]: <info>  [1764059265.9942] manager: (tap1de27b55-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Nov 25 03:27:46 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:46Z|00164|binding|INFO|Claiming lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for this chassis.
Nov 25 03:27:46 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:46Z|00165|binding|INFO|1de27b55-f4ed-42e6-a9b2-65d84a8a77f2: Claiming fa:16:3e:14:9a:0c 10.100.0.9
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 kernel: tap1de27b55-f4 (unregistering): left promiscuous mode
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.014 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : haproxy version is 2.8.14-c23fe91
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : path to executable is /usr/sbin/haproxy
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : Exiting Master process...
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : Exiting Master process...
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [ALERT]    (288928) : Current worker (288933) exited with code 143 (Terminated)
Nov 25 03:27:46 np0005534516 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : All workers exited. Exiting... (0)
Nov 25 03:27:46 np0005534516 systemd[1]: libpod-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope: Deactivated successfully.
Nov 25 03:27:46 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:46Z|00166|binding|INFO|Releasing lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 from this chassis (sb_readonly=0)
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 podman[288972]: 2025-11-25 08:27:46.044065036 +0000 UTC m=+0.148780799 container died a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.046 253542 INFO nova.virt.libvirt.driver [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance destroyed successfully.#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.047 253542 DEBUG nova.objects.instance [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'resources' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.052 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.062 253542 DEBUG nova.virt.libvirt.vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNegativeTestJSON-1651122487-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:44Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.062 253542 DEBUG nova.network.os_vif_util [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.063 253542 DEBUG nova.network.os_vif_util [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.063 253542 DEBUG os_vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.069 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1de27b55-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.076 253542 INFO os_vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4')#033[00m
Nov 25 03:27:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91-userdata-shm.mount: Deactivated successfully.
Nov 25 03:27:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-754c241e4d2ab39b3351c7386a235b84734f23374e154e299f4df3e053aaff0b-merged.mount: Deactivated successfully.
Nov 25 03:27:46 np0005534516 podman[288972]: 2025-11-25 08:27:46.10884306 +0000 UTC m=+0.213558823 container cleanup a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:27:46 np0005534516 systemd[1]: libpod-conmon-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope: Deactivated successfully.
Nov 25 03:27:46 np0005534516 podman[289026]: 2025-11-25 08:27:46.198839891 +0000 UTC m=+0.067102498 container remove a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa0e1dd-4789-49c3-9334-c93ebd72c803]: (4, ('Tue Nov 25 08:27:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da (a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91)\na0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91\nTue Nov 25 08:27:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da (a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91)\na0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.208 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4d59de-cef8-4ac2-b742-e63c766b8eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57f2ccd3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 kernel: tap57f2ccd3-e0: left promiscuous mode
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5990001e-08be-4f9c-8012-15801acf57bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccbc2dd-5cc7-4b38-bfa1-03c8ae8f474e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.240 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11498008-a8a5-4ec9-9875-04a642dab501]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efde0ba3-baf8-422c-a8af-501623aa1f0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461722, 'reachable_time': 18279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289041, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 systemd[1]: run-netns-ovnmeta\x2d57f2ccd3\x2ded2f\x2d4d2f\x2d8493\x2d3dd1452e16da.mount: Deactivated successfully.
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.258 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.258 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[078a0fcf-ac5d-4ed9-a688-0f58846028a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.262 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1634a4-0b63-4976-bd57-25503727750d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.264 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89017b6-fd54-4f32-83f8-3fd599c524b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.834 253542 INFO nova.virt.libvirt.driver [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deleting instance files /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792_del#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.835 253542 INFO nova.virt.libvirt.driver [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deletion of /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792_del complete#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.907 253542 INFO nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.909 253542 DEBUG oslo.service.loopingcall [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.909 253542 DEBUG nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:27:46 np0005534516 nova_compute[253538]: 2025-11-25 08:27:46.910 253542 DEBUG nova.network.neutron [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:27:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 121 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.330 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.331 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.331 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:47 np0005534516 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 WARNING nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received unexpected event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:27:48 np0005534516 nova_compute[253538]: 2025-11-25 08:27:48.653 253542 DEBUG nova.network.neutron [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:48 np0005534516 nova_compute[253538]: 2025-11-25 08:27:48.676 253542 INFO nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 1.77 seconds to deallocate network for instance.#033[00m
Nov 25 03:27:48 np0005534516 nova_compute[253538]: 2025-11-25 08:27:48.739 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:48 np0005534516 nova_compute[253538]: 2025-11-25 08:27:48.740 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:48 np0005534516 nova_compute[253538]: 2025-11-25 08:27:48.815 253542 DEBUG oslo_concurrency.processutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 117 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Nov 25 03:27:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834716908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.315 253542 DEBUG oslo_concurrency.processutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.322 253542 DEBUG nova.compute.provider_tree [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.336 253542 DEBUG nova.scheduler.client.report [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.392 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.412 253542 DEBUG nova.compute.manager [req-e73a15f3-0671-4374-b80b-bd34353be108 req-daddbc0b-843c-4473-9be6-48d928b4d2e8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-deleted-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:49Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:2b:39 10.100.0.13
Nov 25 03:27:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:49Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:2b:39 10.100.0.13
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.449 253542 INFO nova.scheduler.client.report [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Deleted allocations for instance 575b6526-de38-4a80-a952-be1b891b4792#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.547 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:49 np0005534516 nova_compute[253538]: 2025-11-25 08:27:49.994 253542 DEBUG nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:27:51 np0005534516 nova_compute[253538]: 2025-11-25 08:27:51.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 104 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 180 op/s
Nov 25 03:27:52 np0005534516 kernel: tapd6aa33fe-8d (unregistering): left promiscuous mode
Nov 25 03:27:52 np0005534516 NetworkManager[48915]: <info>  [1764059272.4284] device (tapd6aa33fe-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:27:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:52Z|00167|binding|INFO|Releasing lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c from this chassis (sb_readonly=0)
Nov 25 03:27:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:52Z|00168|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c down in Southbound
Nov 25 03:27:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:27:52Z|00169|binding|INFO|Removing iface tapd6aa33fe-8d ovn-installed in OVS
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.490 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.511 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2b:39 10.100.0.13'], port_security=['fa:16:3e:93:2b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.514 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.516 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb132d1d-3791-47d8-9d27-c59547b429cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:52 np0005534516 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 25 03:27:52 np0005534516 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Consumed 13.638s CPU time.
Nov 25 03:27:52 np0005534516 systemd-machined[215790]: Machine qemu-31-instance-0000001a terminated.
Nov 25 03:27:52 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : haproxy version is 2.8.14-c23fe91
Nov 25 03:27:52 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : path to executable is /usr/sbin/haproxy
Nov 25 03:27:52 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [WARNING]  (288371) : Exiting Master process...
Nov 25 03:27:52 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [ALERT]    (288371) : Current worker (288376) exited with code 143 (Terminated)
Nov 25 03:27:52 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [WARNING]  (288371) : All workers exited. Exiting... (0)
Nov 25 03:27:52 np0005534516 systemd[1]: libpod-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope: Deactivated successfully.
Nov 25 03:27:52 np0005534516 podman[289091]: 2025-11-25 08:27:52.683704618 +0000 UTC m=+0.071022647 container died 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:27:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529-userdata-shm.mount: Deactivated successfully.
Nov 25 03:27:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dedff8d23f1f42a2bd3738ba82be5e7d6fdc0734473529518931479e1db062ba-merged.mount: Deactivated successfully.
Nov 25 03:27:52 np0005534516 podman[289091]: 2025-11-25 08:27:52.750452602 +0000 UTC m=+0.137770621 container cleanup 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 03:27:52 np0005534516 systemd[1]: libpod-conmon-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope: Deactivated successfully.
Nov 25 03:27:52 np0005534516 podman[289132]: 2025-11-25 08:27:52.825698432 +0000 UTC m=+0.052295646 container remove 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7109513b-a5e2-4780-aedf-140b4cac6f4a]: (4, ('Tue Nov 25 08:27:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529)\n48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529\nTue Nov 25 08:27:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529)\n48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.834 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4eb6c86-879f-450a-82e0-954af3fb9635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.835 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:52 np0005534516 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.862 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a95262b0-89fb-460b-8d87-ba7245e11c65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.876 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cab376-2519-4d7d-bf47-dd6fd7bb9996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b19fedf7-d269-4a15-b3ad-3441296da47b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d63a616-520c-48e3-9cd7-2c7e34dcd3d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461034, 'reachable_time': 16044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289150, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.901 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:27:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.901 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a3083430-1856-4580-8160-d58e618cf9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:27:52 np0005534516 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.921 253542 DEBUG nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:27:52 np0005534516 nova_compute[253538]: 2025-11-25 08:27:52.923 253542 WARNING nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.009 253542 INFO nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.015 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance destroyed successfully.#033[00m
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.016 253542 DEBUG nova.objects.instance [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.036 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 111 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 179 op/s
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.125 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:27:53
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.349 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.350 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.392 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.501 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.501 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.509 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.509 253542 INFO nova.compute.claims [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:27:53 np0005534516 nova_compute[253538]: 2025-11-25 08:27:53.639 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:27:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:27:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5311080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.087 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.093 253542 DEBUG nova.compute.provider_tree [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.106 253542 DEBUG nova.scheduler.client.report [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.250 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.251 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.319 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.320 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.425 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.461 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.536 253542 DEBUG nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.584 253542 INFO nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] instance snapshotting
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.585 253542 WARNING nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.587 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.588 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.588 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating image(s)
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.611 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.639 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.663 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.666 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.724 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.724 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.725 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.725 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.746 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.750 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:54 np0005534516 nova_compute[253538]: 2025-11-25 08:27:54.886 253542 INFO nova.virt.libvirt.driver [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Beginning cold snapshot process
Nov 25 03:27:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.021 253542 DEBUG nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.021 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 WARNING nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state stopped and task_state image_uploading.
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.024 253542 DEBUG nova.policy [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5981df3e8536420ea5b8fcd98ef92e1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7627e6bf071942db89329eee4a7d6b59', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.029 253542 DEBUG nova.virt.libvirt.imagebackend [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 03:27:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.230 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(b68f0d230d0b4d0ebbc0c5333f15da85) on rbd image(30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.263 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.332 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] resizing rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.722 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully created port: eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.827 253542 DEBUG nova.objects.instance [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'migration_context' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.840 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Ensure instance console log exists: /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:27:55 np0005534516 nova_compute[253538]: 2025-11-25 08:27:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:27:56 np0005534516 nova_compute[253538]: 2025-11-25 08:27:56.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:27:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 25 03:27:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 03:27:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 25 03:27:56 np0005534516 nova_compute[253538]: 2025-11-25 08:27:56.321 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk@b68f0d230d0b4d0ebbc0c5333f15da85 to images/d060ded4-54b8-40d0-bea0-dc1f1f572072 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 03:27:56 np0005534516 nova_compute[253538]: 2025-11-25 08:27:56.439 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/d060ded4-54b8-40d0-bea0-dc1f1f572072 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 03:27:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 180 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.1 MiB/s wr, 206 op/s
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.261 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(b68f0d230d0b4d0ebbc0c5333f15da85) on rbd image(30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.396 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully updated port: eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.411 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.412 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.412 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.882 253542 DEBUG nova.compute.manager [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-changed-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.883 253542 DEBUG nova.compute.manager [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing instance network info cache due to event network-changed-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.883 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:27:57 np0005534516 nova_compute[253538]: 2025-11-25 08:27:57.983 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:27:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 25 03:27:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 03:27:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.353 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(d060ded4-54b8-40d0-bea0-dc1f1f572072) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.777 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.778 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.796 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.860 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.872 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.872 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.880 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.881 253542 INFO nova.compute.claims [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.885 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.885 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance network_info: |[{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.886 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.886 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing network info cache for port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.889 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start _get_guest_xml network_info=[{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.893 253542 WARNING nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.898 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.898 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.901 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.902 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.902 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.903 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.906 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.906 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.907 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.907 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:27:58 np0005534516 nova_compute[253538]: 2025-11-25 08:27:58.911 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.041 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 216 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421263637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.405 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.426 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.430 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2498905340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.513 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.520 253542 DEBUG nova.compute.provider_tree [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.539 253542 DEBUG nova.scheduler.client.report [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.692 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.693 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.750 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.751 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:59 np0005534516 podman[289562]: 2025-11-25 08:27:59.81933087 +0000 UTC m=+0.066320094 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.830 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864016032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.876 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.877 253542 DEBUG nova.virt.libvirt.vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:54Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.878 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.879 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.879 253542 DEBUG nova.objects.instance [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.888 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <uuid>14cd6797-cf47-44da-acac-0e5e3d5dfe11</uuid>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <name>instance-0000001c</name>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:name>tempest-AttachInterfacesV270Test-server-1019332220</nova:name>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:27:58</nova:creationTime>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:user uuid="5981df3e8536420ea5b8fcd98ef92e1b">tempest-AttachInterfacesV270Test-968002196-project-member</nova:user>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:project uuid="7627e6bf071942db89329eee4a7d6b59">tempest-AttachInterfacesV270Test-968002196</nova:project>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <nova:port uuid="eb3ca9e2-cc78-478d-97c2-03b1c7d29b95">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="serial">14cd6797-cf47-44da-acac-0e5e3d5dfe11</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="uuid">14cd6797-cf47-44da-acac-0e5e3d5dfe11</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f3:13:20"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <target dev="tapeb3ca9e2-cc"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/console.log" append="off"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:27:59 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:27:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:27:59 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:27:59 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Preparing to wait for external event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.891 253542 DEBUG nova.virt.libvirt.vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:54Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.891 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.892 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.892 253542 DEBUG os_vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.894 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.896 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb3ca9e2-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.897 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb3ca9e2-cc, col_values=(('external_ids', {'iface-id': 'eb3ca9e2-cc78-478d-97c2-03b1c7d29b95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:13:20', 'vm-uuid': '14cd6797-cf47-44da-acac-0e5e3d5dfe11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:59 np0005534516 NetworkManager[48915]: <info>  [1764059279.8998] manager: (tapeb3ca9e2-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.912 253542 INFO os_vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc')#033[00m
Nov 25 03:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:27:59 np0005534516 nova_compute[253538]: 2025-11-25 08:27:59.957 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.059 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.060 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.060 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:f3:13:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.061 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Using config drive#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.086 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.144 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.145 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.145 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating image(s)#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.226 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.255 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.278 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.283 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.360 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.361 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.361 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.362 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.431 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:00 np0005534516 nova_compute[253538]: 2025-11-25 08:28:00.435 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.012 253542 DEBUG nova.policy [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.016 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating config drive at /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.025 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vr__eu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.061 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059266.0391502, 575b6526-de38-4a80-a952-be1b891b4792 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.062 253542 INFO nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 236 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 185 op/s
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.083 253542 DEBUG nova.compute.manager [None req-c3086c63-fc5e-49a1-9cea-9bd1c40fcbcc - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.174 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vr__eu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.206 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.209 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.389 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.954s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.457 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.523 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.525 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deleting local config drive /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.543 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updated VIF entry in instance network info cache for port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.543 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.556 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:01 np0005534516 kernel: tapeb3ca9e2-cc: entered promiscuous mode
Nov 25 03:28:01 np0005534516 NetworkManager[48915]: <info>  [1764059281.6058] manager: (tapeb3ca9e2-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 25 03:28:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:01Z|00170|binding|INFO|Claiming lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for this chassis.
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:01Z|00171|binding|INFO|eb3ca9e2-cc78-478d-97c2-03b1c7d29b95: Claiming fa:16:3e:f3:13:20 10.100.0.6
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.627 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:13:20 10.100.0.6'], port_security=['fa:16:3e:f3:13:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.628 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 bound to our chassis#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.630 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783#033[00m
Nov 25 03:28:01 np0005534516 systemd-udevd[289809]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.646 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8245ea9f-c32f-4489-964d-b73c2c66eb67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.647 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap431353a0-b1 in ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.648 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap431353a0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.649 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd36c31-1ec8-44be-9547-745d2afe3627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c85e3dc9-96a1-4921-8aa2-3f3eff5d3537]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 systemd-machined[215790]: New machine qemu-33-instance-0000001c.
Nov 25 03:28:01 np0005534516 NetworkManager[48915]: <info>  [1764059281.6644] device (tapeb3ca9e2-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:01 np0005534516 NetworkManager[48915]: <info>  [1764059281.6652] device (tapeb3ca9e2-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.668 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8aac51c7-5d2e-4f4b-86ba-6d5104857451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c3f24b-3878-42da-ada3-e5d1e117f670]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:01Z|00172|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 ovn-installed in OVS
Nov 25 03:28:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:01Z|00173|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 up in Southbound
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.713 253542 DEBUG nova.objects.instance [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.726 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.726 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Ensure instance console log exists: /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.727 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.728 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.728 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.740 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d9688416-8e4d-4c0f-82ca-0e0cb02c2f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.742 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Successfully created port: a0d5bf0b-a708-4159-968d-5c597313379d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.750 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4562937-3bf4-47eb-b0c0-4dc8388952b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 NetworkManager[48915]: <info>  [1764059281.7516] manager: (tap431353a0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Nov 25 03:28:01 np0005534516 systemd-udevd[289820]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3a54720f-fc9c-4d60-b240-5edcc5c5b038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dd55da40-385a-4174-9b9a-2227a4aaa2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 NetworkManager[48915]: <info>  [1764059281.8203] device (tap431353a0-b0): carrier: link connected
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.826 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc35469-4394-48bc-bb9f-0f9a4ea33d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4343a0-baf3-4a55-9784-dc268726fe86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289865, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.868 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[102fe676-9fd3-4b7e-bb32-a14efdac2890]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:1b92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463641, 'tstamp': 463641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289866, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b562297-d733-49f3-9bcb-e70af75d421b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 176, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 176, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289867, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.927 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc014f6c-b33f-4f6b-a01e-3f3b6a10098a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:01 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.999 253542 INFO nova.virt.libvirt.driver [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Snapshot image upload complete#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:01.999 253542 INFO nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 7.41 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.002 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b111b0f6-282f-42cc-ab18-d1c45afd1139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.003 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.004 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.004 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:02 np0005534516 NetworkManager[48915]: <info>  [1764059282.0071] manager: (tap431353a0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 25 03:28:02 np0005534516 kernel: tap431353a0-b0: entered promiscuous mode
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.013 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:02Z|00174|binding|INFO|Releasing lport eb9dc67a-a121-4efb-a3df-9647173b8d46 from this chassis (sb_readonly=0)
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.036 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea49440e-fd3f-4c75-842c-7d67ab55c126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.038 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.039 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'env', 'PROCESS_TAG=haproxy-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/431353a0-bdb3-445c-95e7-9cd19a8e3783.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.196 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.1963282, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.198 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.214 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.219 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.1964796, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.219 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.223 253542 DEBUG nova.compute.manager [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.223 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG nova.compute.manager [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Processing event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.225 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.229 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.232 253542 INFO nova.virt.libvirt.driver [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance spawned successfully.#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.233 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.256 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.263 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.2292566, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.264 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.267 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.268 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.268 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.269 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.269 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.270 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.298 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.303 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.321 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.429 253542 INFO nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 7.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.430 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.508 253542 INFO nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 9.02 seconds to build instance.#033[00m
Nov 25 03:28:02 np0005534516 podman[289945]: 2025-11-25 08:28:02.427393186 +0000 UTC m=+0.027592674 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:02 np0005534516 podman[289945]: 2025-11-25 08:28:02.546074136 +0000 UTC m=+0.146273604 container create 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:02 np0005534516 nova_compute[253538]: 2025-11-25 08:28:02.563 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:02 np0005534516 systemd[1]: Started libpod-conmon-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope.
Nov 25 03:28:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d59ce0fb78beb290599dd8ea1cac60e0a337aea145db00ab9301e67824cad23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:02 np0005534516 podman[289945]: 2025-11-25 08:28:02.68224433 +0000 UTC m=+0.282443838 container init 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:28:02 np0005534516 podman[289945]: 2025-11-25 08:28:02.690388675 +0000 UTC m=+0.290588143 container start 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:28:02 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : New worker (289966) forked
Nov 25 03:28:02 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : Loading success.
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 258 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 203 op/s
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.215 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Successfully updated port: a0d5bf0b-a708-4159-968d-5c597313379d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.335 253542 DEBUG nova.compute.manager [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-changed-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.336 253542 DEBUG nova.compute.manager [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Refreshing instance network info cache due to event network-changed-a0d5bf0b-a708-4159-968d-5c597313379d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.336 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 25 03:28:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 03:28:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014231600410236375 of space, bias 1.0, pg target 0.4269480123070913 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014230328596079964 of space, bias 1.0, pg target 0.42690985788239894 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:28:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:28:03 np0005534516 nova_compute[253538]: 2025-11-25 08:28:03.768 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:03 np0005534516 podman[289975]: 2025-11-25 08:28:03.823531774 +0000 UTC m=+0.071503097 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.545 253542 DEBUG nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.547 253542 DEBUG nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.547 253542 WARNING nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.743 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.842 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.844 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.844 253542 DEBUG nova.objects.instance [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'flavor' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.855 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance network_info: |[{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Refreshing network info cache for port a0d5bf0b-a708-4159-968d-5c597313379d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.860 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start _get_guest_xml network_info=[{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.862 253542 DEBUG nova.objects.instance [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'pci_requests' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.878 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.884 253542 WARNING nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.890 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.891 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.893 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.897 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.899 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:04 np0005534516 nova_compute[253538]: 2025-11-25 08:28:04.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 167 op/s
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.234 253542 DEBUG nova.policy [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5981df3e8536420ea5b8fcd98ef92e1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7627e6bf071942db89329eee4a7d6b59', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264332856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.394 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.419 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.424 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5918 writes, 26K keys, 5918 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5917 writes, 5917 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1509 writes, 6764 keys, 1509 commit groups, 1.0 writes per commit group, ingest: 9.28 MB, 0.02 MB/s#012Interval WAL: 1509 writes, 1509 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.4      1.42              0.10        15    0.094       0      0       0.0       0.0#012  L6      1/0    6.96 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     40.5     32.7      3.02              0.32        14    0.215     64K   7831       0.0       0.0#012 Sum      1/0    6.96 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     27.5     29.1      4.43              0.42        29    0.153     64K   7831       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     20.2     20.1      1.90              0.14         8    0.237     21K   2587       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     40.5     32.7      3.02              0.32        14    0.215     64K   7831       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.4      1.41              0.10        14    0.101       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 4.4 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 308.00 MB usage: 13.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000117 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(863,12.69 MB,4.11933%) FilterBlock(30,184.17 KB,0.0583946%) IndexBlock(30,334.05 KB,0.105915%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241261070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.884 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.887 253542 DEBUG nova.virt.libvirt.vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:59Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.888 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.889 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.890 253542 DEBUG nova.objects.instance [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.917 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <uuid>59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</uuid>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <name>instance-0000001d</name>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-184009211</nova:name>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:04</nova:creationTime>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <nova:port uuid="a0d5bf0b-a708-4159-968d-5c597313379d">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="serial">59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="uuid">59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:67:41:bb"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <target dev="tapa0d5bf0b-a7"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/console.log" append="off"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:05 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:05 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:05 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:05 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.919 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Preparing to wait for external event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.919 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.virt.libvirt.vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:59Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.922 253542 DEBUG os_vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.923 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.923 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.926 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.926 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d5bf0b-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.927 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0d5bf0b-a7, col_values=(('external_ids', {'iface-id': 'a0d5bf0b-a708-4159-968d-5c597313379d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:41:bb', 'vm-uuid': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:05 np0005534516 NetworkManager[48915]: <info>  [1764059285.9289] manager: (tapa0d5bf0b-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.937 253542 INFO os_vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7')#033[00m
Nov 25 03:28:05 np0005534516 nova_compute[253538]: 2025-11-25 08:28:05.962 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully created port: c620cff4-b028-4d86-b951-0d489781da2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.004 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.004 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.005 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:67:41:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.005 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Using config drive#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.028 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.071 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updated VIF entry in instance network info cache for port a0d5bf0b-a708-4159-968d-5c597313379d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.071 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.089 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.616 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.617 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.617 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.618 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.619 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.621 253542 INFO nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Terminating instance#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.622 253542 DEBUG nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance destroyed successfully.#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.631 253542 DEBUG nova.objects.instance [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.642 253542 DEBUG nova.virt.libvirt.vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.642 253542 DEBUG nova.network.os_vif_util [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.643 253542 DEBUG nova.network.os_vif_util [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.643 253542 DEBUG os_vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.646 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6aa33fe-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.649 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.656 253542 INFO os_vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d')#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.679 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating config drive at /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.683 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrjw_fcq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.831 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrjw_fcq" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.862 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:06 np0005534516 nova_compute[253538]: 2025-11-25 08:28:06.865 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 248 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 222 op/s
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.288 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully updated port: c620cff4-b028-4d86-b951-0d489781da2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.310 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.310 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.311 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.506 253542 WARNING nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] 431353a0-bdb3-445c-95e7-9cd19a8e3783 already exists in list: networks containing: ['431353a0-bdb3-445c-95e7-9cd19a8e3783']. ignoring it#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.714 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059272.7129986, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.715 253542 INFO nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.731 253542 DEBUG nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.734 253542 DEBUG nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:07 np0005534516 nova_compute[253538]: 2025-11-25 08:28:07.754 253542 INFO nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.469 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.470 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deleting local config drive /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG nova.compute.manager [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-changed-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG nova.compute.manager [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing instance network info cache due to event network-changed-c620cff4-b028-4d86-b951-0d489781da2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.5251] manager: (tapa0d5bf0b-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 25 03:28:08 np0005534516 kernel: tapa0d5bf0b-a7: entered promiscuous mode
Nov 25 03:28:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:08Z|00175|binding|INFO|Claiming lport a0d5bf0b-a708-4159-968d-5c597313379d for this chassis.
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:08Z|00176|binding|INFO|a0d5bf0b-a708-4159-968d-5c597313379d: Claiming fa:16:3e:67:41:bb 10.100.0.9
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.542 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:41:bb 10.100.0.9'], port_security=['fa:16:3e:67:41:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0d5bf0b-a708-4159-968d-5c597313379d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.544 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0d5bf0b-a708-4159-968d-5c597313379d in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.545 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75352861-793b-4210-a151-14303e116dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.557 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.558 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[236a2c3a-f199-4973-8e36-fc67fa1d7d0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.559 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78c52bc3-b874-4caa-b40c-03c5a17c43dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.576 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[107b6888-0786-451d-8ff9-d412e8de5f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 systemd-machined[215790]: New machine qemu-34-instance-0000001d.
Nov 25 03:28:08 np0005534516 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa87def3-074e-4e20-bf4d-f65a54834655]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:08Z|00177|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d ovn-installed in OVS
Nov 25 03:28:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:08Z|00178|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d up in Southbound
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:08 np0005534516 systemd-udevd[290155]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.6348] device (tapa0d5bf0b-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.6356] device (tapa0d5bf0b-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3873ada9-7138-4051-92ef-7977e3c1a97b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 systemd-udevd[290159]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.645 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[890d1d5e-e1f7-4736-85f9-26747f9ac50d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.6463] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.671 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[03fd2872-b28d-4e7f-ae4c-40bb2c2827b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.674 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06d9993a-af4f-4a59-9dfa-ead28fdb1931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.6961] device (tap52a7668b-f0): carrier: link connected
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.701 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d87f490e-82d6-4b69-8dd4-4434ea9fbcdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f052f4c1-a9ca-47fd-ba07-3c7dc9b85658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464329, 'reachable_time': 30934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290184, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cabd5b9-dfc4-41cc-9818-f2ed5d8c29b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464329, 'tstamp': 464329}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290185, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.756 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf71ae5-cee1-4ca1-9212-1766d923da1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464329, 'reachable_time': 30934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290186, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.783 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cec52e-27e1-4d17-9f28-3e0eae284d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa4ff86-a7de-4440-9e67-25e8397517b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.861 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:08 np0005534516 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:08 np0005534516 NetworkManager[48915]: <info>  [1764059288.8645] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.866 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:08Z|00179|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:08 np0005534516 nova_compute[253538]: 2025-11-25 08:28:08.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.898 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf23358-e226-4791-aa47-c4663f04cabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.900 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.901 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 230 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.184 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.184 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.198 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.246 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.266 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.267 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.268 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing network info cache for port c620cff4-b028-4d86-b951-0d489781da2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.273 253542 DEBUG nova.virt.libvirt.vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.274 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.275 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.275 253542 DEBUG os_vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.276 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.277 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.281 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.282 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc620cff4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc620cff4-b0, col_values=(('external_ids', {'iface-id': 'c620cff4-b028-4d86-b951-0d489781da2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:33:b7', 'vm-uuid': '14cd6797-cf47-44da-acac-0e5e3d5dfe11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:09 np0005534516 NetworkManager[48915]: <info>  [1764059289.2900] manager: (tapc620cff4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.294 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.295 253542 INFO nova.compute.claims [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.299 253542 INFO os_vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0')#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.300 253542 DEBUG nova.virt.libvirt.vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.300 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.301 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.303 253542 DEBUG nova.virt.libvirt.guest [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:b0:33:b7"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <target dev="tapc620cff4-b0"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:28:09 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:28:09 np0005534516 NetworkManager[48915]: <info>  [1764059289.3158] manager: (tapc620cff4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 25 03:28:09 np0005534516 kernel: tapc620cff4-b0: entered promiscuous mode
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:09Z|00180|binding|INFO|Claiming lport c620cff4-b028-4d86-b951-0d489781da2f for this chassis.
Nov 25 03:28:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:09Z|00181|binding|INFO|c620cff4-b028-4d86-b951-0d489781da2f: Claiming fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 03:28:09 np0005534516 systemd-udevd[290179]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:09Z|00182|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f ovn-installed in OVS
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:09Z|00183|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f up in Southbound
Nov 25 03:28:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:09.339 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:33:b7 10.100.0.5'], port_security=['fa:16:3e:b0:33:b7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c620cff4-b028-4d86-b951-0d489781da2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 NetworkManager[48915]: <info>  [1764059289.3471] device (tapc620cff4-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:09 np0005534516 NetworkManager[48915]: <info>  [1764059289.3492] device (tapc620cff4-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:09 np0005534516 podman[290217]: 2025-11-25 08:28:09.262905556 +0000 UTC m=+0.026243117 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.414 253542 DEBUG nova.compute.manager [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.414 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.415 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.415 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.416 253542 DEBUG nova.compute.manager [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Processing event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.433 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.433 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.434 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:f3:13:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.434 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:b0:33:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.460 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.497 253542 DEBUG nova.virt.libvirt.guest [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesV270Test-server-1019332220</nova:name>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:28:09</nova:creationTime>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:user uuid="5981df3e8536420ea5b8fcd98ef92e1b">tempest-AttachInterfacesV270Test-968002196-project-member</nova:user>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:project uuid="7627e6bf071942db89329eee4a7d6b59">tempest-AttachInterfacesV270Test-968002196</nova:project>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:port uuid="eb3ca9e2-cc78-478d-97c2-03b1c7d29b95">
Nov 25 03:28:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    <nova:port uuid="c620cff4-b028-4d86-b951-0d489781da2f">
Nov 25 03:28:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:28:09 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:28:09 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:28:09 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.632 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.632 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:28:09 np0005534516 podman[290217]: 2025-11-25 08:28:09.646714464 +0000 UTC m=+0.410052015 container create 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.656 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.658 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:28:09 np0005534516 podman[290237]: 2025-11-25 08:28:09.792800972 +0000 UTC m=+0.420787571 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:09 np0005534516 systemd[1]: Started libpod-conmon-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope.
Nov 25 03:28:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.841 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8408206, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.841 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.843 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/320269360c4cc6b75b0f40afebd67ecea31de81414d235d4d1629a905a53b4de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.849 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.854 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance spawned successfully.#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.854 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.883 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.888 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.891 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.893 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.893 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8410053, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.926 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.930 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8494318, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.930 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.956 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.960 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.979 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.985 253542 INFO nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:09 np0005534516 nova_compute[253538]: 2025-11-25 08:28:09.985 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:09 np0005534516 podman[290217]: 2025-11-25 08:28:09.994337582 +0000 UTC m=+0.757675143 container init 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:28:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062628715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:10 np0005534516 podman[290217]: 2025-11-25 08:28:10.003093324 +0000 UTC m=+0.766430865 container start 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.019 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.025 253542 DEBUG nova.compute.provider_tree [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:10 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : New worker (290333) forked
Nov 25 03:28:10 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : Loading success.
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.035 253542 DEBUG nova.scheduler.client.report [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.075 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.076 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.089 253542 INFO nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 11.24 seconds to build instance.#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.109 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:10 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.128 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.128 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.146 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.186 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.200 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c620cff4-b028-4d86-b951-0d489781da2f in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.203 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd509c0-da19-4905-8d1c-1e3bf657ec6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.249 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[89a70fbf-7c68-4cb4-915a-3e38b0618162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.252 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5357f81c-6edd-43d8-98b3-3067f96b6388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.280 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2fd3b9-cd8e-43d4-8e2e-9cac8c0e10d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[907a9ac0-2263-4e98-b1d7-debff6924579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290347, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b15aa228-267f-46d4-9501-759d563378a4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463655, 'tstamp': 463655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290348, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463659, 'tstamp': 463659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290348, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.310 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.312 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.335 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.336 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.336 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating image(s)#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.414 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.446 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.472 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.477 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.517 253542 DEBUG nova.policy [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.562 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.562 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.566 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.567 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.588 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.597 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6998a6cf-b660-4558-98cf-bf5984775b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.847 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updated VIF entry in instance network info cache for port c620cff4-b028-4d86-b951-0d489781da2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.848 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:10 np0005534516 nova_compute[253538]: 2025-11-25 08:28:10.866 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 190 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Nov 25 03:28:12 np0005534516 nova_compute[253538]: 2025-11-25 08:28:12.008 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6998a6cf-b660-4558-98cf-bf5984775b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:12 np0005534516 nova_compute[253538]: 2025-11-25 08:28:12.068 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:12 np0005534516 nova_compute[253538]: 2025-11-25 08:28:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:12 np0005534516 nova_compute[253538]: 2025-11-25 08:28:12.945 253542 INFO nova.virt.libvirt.driver [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deleting instance files /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_del#033[00m
Nov 25 03:28:12 np0005534516 nova_compute[253538]: 2025-11-25 08:28:12.947 253542 INFO nova.virt.libvirt.driver [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deletion of /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_del complete#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.021 253542 DEBUG nova.objects.instance [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.041 253542 DEBUG nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.042 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.043 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.044 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.044 253542 DEBUG nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.045 253542 WARNING nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received unexpected event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with vm_state active and task_state None.#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.050 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.051 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Ensure instance console log exists: /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.052 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.053 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.054 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.060 253542 INFO nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 6.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.061 253542 DEBUG oslo.service.loopingcall [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.063 253542 DEBUG nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.064 253542 DEBUG nova.network.neutron [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:28:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 182 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.105 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Successfully created port: 65fd7d0e-59ee-4411-92ee-f934016f1d1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.236 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.237 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.251 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.321 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.322 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.328 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.329 253542 INFO nova.compute.claims [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.499 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.963 253542 DEBUG nova.network.neutron [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:13 np0005534516 nova_compute[253538]: 2025-11-25 08:28:13.986 253542 INFO nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424324964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.030 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.036 253542 DEBUG nova.compute.provider_tree [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.039 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.045 253542 DEBUG nova.scheduler.client.report [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.063 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.063 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.066 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.086 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.127 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.128 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.145 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.222 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.297 253542 DEBUG oslo_concurrency.processutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.350 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.353 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.354 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating image(s)#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.393 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.432 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.456 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.460 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716634779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.527 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.530 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.530 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.531 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.531 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.550 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.559 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4c934302-d7cd-4826-835e-cab6dba97e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.669 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.670 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.675 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.675 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793932564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.838 253542 DEBUG oslo_concurrency.processutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.847 253542 DEBUG nova.compute.provider_tree [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.858 253542 DEBUG nova.scheduler.client.report [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.878 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.904 253542 INFO nova.scheduler.client.report [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 30afcbba-78f3-433c-ba0a-5a2d25cf2d48#033[00m
Nov 25 03:28:14 np0005534516 nova_compute[253538]: 2025-11-25 08:28:14.973 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.005 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.007 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4013MB free_disk=59.918922424316406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.016 253542 DEBUG nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.017 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.017 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 DEBUG nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 WARNING nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state active and task_state None.#033[00m
Nov 25 03:28:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 189 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 246 op/s
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.085 253542 DEBUG nova.policy [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee2fe69e0dfa4467926cec954790823e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bd945273cd04d8981dcb3a319e8d026', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:15 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.115 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 14cd6797-cf47-44da-acac-0e5e3d5dfe11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.115 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 6998a6cf-b660-4558-98cf-bf5984775b1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4c934302-d7cd-4826-835e-cab6dba97e3a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.274 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.452 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Successfully updated port: 65fd7d0e-59ee-4411-92ee-f934016f1d1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:15Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 03:28:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:15Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.502 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4c934302-d7cd-4826-835e-cab6dba97e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.943s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.568 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] resizing rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3303659574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.754 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.759 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.767 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.790 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.900 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.900 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.909 253542 DEBUG nova.objects.instance [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.927 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.928 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Ensure instance console log exists: /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.929 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.930 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:15 np0005534516 nova_compute[253538]: 2025-11-25 08:28:15.930 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.201 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.202 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.203 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.203 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.204 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.206 253542 INFO nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Terminating instance#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.208 253542 DEBUG nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.278 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Successfully created port: b62f3741-11c8-4840-a720-d6ee07f06284 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:16 np0005534516 kernel: tapeb3ca9e2-cc (unregistering): left promiscuous mode
Nov 25 03:28:16 np0005534516 NetworkManager[48915]: <info>  [1764059296.3065] device (tapeb3ca9e2-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00184|binding|INFO|Releasing lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 from this chassis (sb_readonly=0)
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00185|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 down in Southbound
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00186|binding|INFO|Removing iface tapeb3ca9e2-cc ovn-installed in OVS
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 kernel: tapc620cff4-b0 (unregistering): left promiscuous mode
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.372 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:13:20 10.100.0.6'], port_security=['fa:16:3e:f3:13:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.374 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.376 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783#033[00m
Nov 25 03:28:16 np0005534516 NetworkManager[48915]: <info>  [1764059296.3831] device (tapc620cff4-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0eb01a-92bc-40af-af50-2d638a7fc5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00187|binding|INFO|Releasing lport c620cff4-b028-4d86-b951-0d489781da2f from this chassis (sb_readonly=0)
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00188|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f down in Southbound
Nov 25 03:28:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:16Z|00189|binding|INFO|Removing iface tapc620cff4-b0 ovn-installed in OVS
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.429 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:33:b7 10.100.0.5'], port_security=['fa:16:3e:b0:33:b7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c620cff4-b028-4d86-b951-0d489781da2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.431 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[db6bfac8-c97f-4596-8e7c-51c609262b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.434 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9220b30b-5f0a-4ed9-8474-63b9b3e30ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 25 03:28:16 np0005534516 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 12.629s CPU time.
Nov 25 03:28:16 np0005534516 systemd-machined[215790]: Machine qemu-33-instance-0000001c terminated.
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.465 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[27c364a4-baac-4016-95b5-6715312d05d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.482 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d96486-6f8b-4299-a5de-88fdd0eb0cdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290789, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e914b39-c53a-4386-8bb1-7c5bb8bbe49e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463655, 'tstamp': 463655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290790, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463659, 'tstamp': 463659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290790, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.502 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.510 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.510 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.511 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.511 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.512 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c620cff4-b028-4d86-b951-0d489781da2f in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.514 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 431353a0-bdb3-445c-95e7-9cd19a8e3783, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[446cfd46-da82-4561-b905-4c7383c4bd87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.515 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 namespace which is not needed anymore#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 NetworkManager[48915]: <info>  [1764059296.6568] manager: (tapc620cff4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.677 253542 INFO nova.virt.libvirt.driver [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance destroyed successfully.#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.677 253542 DEBUG nova.objects.instance [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'resources' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.690 253542 DEBUG nova.virt.libvirt.vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.691 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.691 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.692 253542 DEBUG os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.694 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb3ca9e2-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.696 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.702 253542 INFO os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc')#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.702 253542 DEBUG nova.virt.libvirt.vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.703 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.703 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.704 253542 DEBUG os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.705 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc620cff4-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.712 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.713 253542 INFO os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0')#033[00m
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : haproxy version is 2.8.14-c23fe91
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : path to executable is /usr/sbin/haproxy
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : Exiting Master process...
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : Exiting Master process...
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [ALERT]    (289964) : Current worker (289966) exited with code 143 (Terminated)
Nov 25 03:28:16 np0005534516 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : All workers exited. Exiting... (0)
Nov 25 03:28:16 np0005534516 systemd[1]: libpod-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope: Deactivated successfully.
Nov 25 03:28:16 np0005534516 podman[290811]: 2025-11-25 08:28:16.813422329 +0000 UTC m=+0.215704433 container died 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.895 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.895 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:16 np0005534516 nova_compute[253538]: 2025-11-25 08:28:16.917 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.027 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.028 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.028 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.029 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.029 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.030 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.036 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.053 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.054 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance network_info: |[{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.058 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start _get_guest_xml network_info=[{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.066 253542 WARNING nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.072 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.073 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16-userdata-shm.mount: Deactivated successfully.
Nov 25 03:28:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8d59ce0fb78beb290599dd8ea1cac60e0a337aea145db00ab9301e67824cad23-merged.mount: Deactivated successfully.
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.083 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.084 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.084 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.085 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 215 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.9 MiB/s wr, 224 op/s
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.086 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.087 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.087 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.088 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.089 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.089 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.090 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.090 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.091 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.091 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.097 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:17 np0005534516 podman[290811]: 2025-11-25 08:28:17.187247711 +0000 UTC m=+0.589529795 container cleanup 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.232 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-deleted-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.234 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.235 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.235 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.236 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.237 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.237 253542 WARNING nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.238 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-changed-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.238 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Refreshing instance network info cache due to event network-changed-65fd7d0e-59ee-4411-92ee-f934016f1d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.239 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.240 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.240 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Refreshing network info cache for port 65fd7d0e-59ee-4411-92ee-f934016f1d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:17 np0005534516 podman[290878]: 2025-11-25 08:28:17.435171493 +0000 UTC m=+0.207697541 container remove 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.444 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eaa1de-1ff2-4b82-b7e3-df675eaac5b5]: (4, ('Tue Nov 25 08:28:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 (36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16)\n36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16\nTue Nov 25 08:28:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 (36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16)\n36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.445 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd93369-577a-4871-9226-7913ed44baaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.446 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:17 np0005534516 systemd[1]: libpod-conmon-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope: Deactivated successfully.
Nov 25 03:28:17 np0005534516 kernel: tap431353a0-b0: left promiscuous mode
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.529 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a62f3cb8-0419-49bc-8ef9-5d5bad120cf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223899209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f29e96-4faf-4fcd-8cc1-575cab74614e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.546 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d08e1cd5-e5a0-4121-891b-ad8cad749366]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.564 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f17340a1-e306-4027-8cb7-491c18093de5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463633, 'reachable_time': 41570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290914, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.567 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:28:17 np0005534516 systemd[1]: run-netns-ovnmeta\x2d431353a0\x2dbdb3\x2d445c\x2d95e7\x2d9cd19a8e3783.mount: Deactivated successfully.
Nov 25 03:28:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.567 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6f47a3b2-26cd-42d8-9462-0a470311ff7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.573 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.592 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.600 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.773 253542 INFO nova.virt.libvirt.driver [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deleting instance files /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11_del#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.775 253542 INFO nova.virt.libvirt.driver [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deletion of /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11_del complete#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.843 253542 INFO nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 1.63 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.844 253542 DEBUG oslo.service.loopingcall [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.845 253542 DEBUG nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:28:17 np0005534516 nova_compute[253538]: 2025-11-25 08:28:17.845 253542 DEBUG nova.network.neutron [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:28:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837282604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.063 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Successfully updated port: b62f3741-11c8-4840-a720-d6ee07f06284 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.069 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.070 253542 DEBUG nova.virt.libvirt.vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:10Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.071 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.072 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.073 253542 DEBUG nova.objects.instance [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.115 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <uuid>6998a6cf-b660-4558-98cf-bf5984775b1d</uuid>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <name>instance-0000001e</name>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-577988769</nova:name>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:17</nova:creationTime>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <nova:port uuid="65fd7d0e-59ee-4411-92ee-f934016f1d1f">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="serial">6998a6cf-b660-4558-98cf-bf5984775b1d</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="uuid">6998a6cf-b660-4558-98cf-bf5984775b1d</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:73:80:23"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <target dev="tap65fd7d0e-59"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/console.log" append="off"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:18 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:18 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:18 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:18 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.121 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Preparing to wait for external event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.123 253542 DEBUG nova.virt.libvirt.vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:10Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.123 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.124 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.124 253542 DEBUG os_vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.129 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65fd7d0e-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.129 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65fd7d0e-59, col_values=(('external_ids', {'iface-id': '65fd7d0e-59ee-4411-92ee-f934016f1d1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:80:23', 'vm-uuid': '6998a6cf-b660-4558-98cf-bf5984775b1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:18 np0005534516 NetworkManager[48915]: <info>  [1764059298.1319] manager: (tap65fd7d0e-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.135 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.135 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquired lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.136 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.138 253542 INFO os_vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59')#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.214 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.215 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.215 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:73:80:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.216 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Using config drive#033[00m
Nov 25 03:28:18 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.237 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:18.997 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 210 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 233 op/s
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.313 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating config drive at /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.317 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu7941ar execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.461 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu7941ar" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.487 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.491 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.638 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.640 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deleting local config drive /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:19 np0005534516 kernel: tap65fd7d0e-59: entered promiscuous mode
Nov 25 03:28:19 np0005534516 NetworkManager[48915]: <info>  [1764059299.6902] manager: (tap65fd7d0e-59): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 25 03:28:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:19Z|00190|binding|INFO|Claiming lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f for this chassis.
Nov 25 03:28:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:19Z|00191|binding|INFO|65fd7d0e-59ee-4411-92ee-f934016f1d1f: Claiming fa:16:3e:73:80:23 10.100.0.10
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.707 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:80:23 10.100.0.10'], port_security=['fa:16:3e:73:80:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6998a6cf-b660-4558-98cf-bf5984775b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=65fd7d0e-59ee-4411-92ee-f934016f1d1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.709 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 65fd7d0e-59ee-4411-92ee-f934016f1d1f in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.711 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:28:19 np0005534516 systemd-udevd[291029]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.729 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8b162-ccb4-4de4-8a59-62668f306bee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.730 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.732 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.732 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e179db54-0dfa-4c22-a4f1-52093477db63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a6a6ad-f50f-40f1-b9c2-a5203d9a2fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 NetworkManager[48915]: <info>  [1764059299.7400] device (tap65fd7d0e-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:19 np0005534516 NetworkManager[48915]: <info>  [1764059299.7410] device (tap65fd7d0e-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:19 np0005534516 systemd-machined[215790]: New machine qemu-35-instance-0000001e.
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.747 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d1943182-2098-4cbc-b20c-5e4d36df2cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.760 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updated VIF entry in instance network info cache for port 65fd7d0e-59ee-4411-92ee-f934016f1d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.760 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:19Z|00192|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f ovn-installed in OVS
Nov 25 03:28:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:19Z|00193|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f up in Southbound
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[578d09b3-664c-4564-8a93-7767c9cd79e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.780 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.780 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.783 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.783 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:28:19 np0005534516 nova_compute[253538]: 2025-11-25 08:28:19.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.804 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf661a6-6335-44da-b3c0-ec23ab132059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 NetworkManager[48915]: <info>  [1764059299.8100] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b99e90af-84d4-4dea-b006-2e357a25e7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.838 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[73cbca7f-921b-49e1-9f0d-70f19400ed70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.842 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[31cda7a4-c392-439d-8e75-1a461b261efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 NetworkManager[48915]: <info>  [1764059299.8653] device (tapba659d6c-c0): carrier: link connected
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.870 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa99d32-a968-4a62-90f8-abcfad042a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.885 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f58dfe-1f33-4ce8-9194-d8252ad98b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465446, 'reachable_time': 41021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291062, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2c4b18-4a39-47db-ada6-030a8b98ee97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465446, 'tstamp': 465446}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291063, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07c7a8cb-939c-4a66-b7e0-4545ebcb5d72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465446, 'reachable_time': 41021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291064, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8d391f-63d5-4401-b349-df4681893d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0cf228-a61a-4d74-8001-99b44dfddd42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.020 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.020 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.021 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.023 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:20 np0005534516 NetworkManager[48915]: <info>  [1764059300.0236] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 25 03:28:20 np0005534516 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.030 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:20Z|00194|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.034 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd12b5f-4b04-4908-a05b-239187a40815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.036 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.036 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.039 253542 DEBUG nova.network.neutron [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 DEBUG nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 WARNING nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.055 253542 INFO nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 2.21 seconds to deallocate network for instance.#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.120 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.120 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.121 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.136 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Releasing lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.137 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance network_info: |[{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.139 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start _get_guest_xml network_info=[{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.144 253542 WARNING nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.154 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.160 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.165 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.166 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.166 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.170 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.173 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.205 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 WARNING nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-changed-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Refreshing instance network info cache due to event network-changed-b62f3741-11c8-4840-a720-d6ee07f06284. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Refreshing network info cache for port b62f3741-11c8-4840-a720-d6ee07f06284 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.265 253542 DEBUG oslo_concurrency.processutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:20 np0005534516 podman[291114]: 2025-11-25 08:28:20.417171304 +0000 UTC m=+0.031966764 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984392094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.634 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.671 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.675 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221969757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.705 253542 DEBUG oslo_concurrency.processutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.714 253542 DEBUG nova.compute.provider_tree [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.747 253542 DEBUG nova.scheduler.client.report [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.760 253542 DEBUG nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.775 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.802 253542 INFO nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] instance snapshotting#033[00m
Nov 25 03:28:20 np0005534516 podman[291114]: 2025-11-25 08:28:20.81895388 +0000 UTC m=+0.433749320 container create a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.831 253542 INFO nova.scheduler.client.report [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Deleted allocations for instance 14cd6797-cf47-44da-acac-0e5e3d5dfe11#033[00m
Nov 25 03:28:20 np0005534516 systemd[1]: Started libpod-conmon-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope.
Nov 25 03:28:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.894 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7948cd9e3db77c2b6e7fa96e239c18a011489fdccbd892f78fbbd5fdba626e7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:20 np0005534516 podman[291114]: 2025-11-25 08:28:20.91050122 +0000 UTC m=+0.525296670 container init a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 03:28:20 np0005534516 podman[291114]: 2025-11-25 08:28:20.916466225 +0000 UTC m=+0.531261665 container start a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 03:28:20 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : New worker (291237) forked
Nov 25 03:28:20 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : Loading success.
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059300.977583, 6998a6cf-b660-4558-98cf-bf5984775b1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:20 np0005534516 nova_compute[253538]: 2025-11-25 08:28:20.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.000 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059300.9778752, 6998a6cf-b660-4558-98cf-bf5984775b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.000 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.019 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.022 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.045 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 210 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.2 MiB/s wr, 218 op/s
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.138 253542 INFO nova.virt.libvirt.driver [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Beginning live snapshot process#033[00m
Nov 25 03:28:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752781916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.166 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.168 253542 DEBUG nova.virt.libvirt.vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:14Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.168 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.169 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.171 253542 DEBUG nova.objects.instance [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.188 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <uuid>4c934302-d7cd-4826-835e-cab6dba97e3a</uuid>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <name>instance-0000001f</name>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1824498337</nova:name>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:20</nova:creationTime>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:user uuid="ee2fe69e0dfa4467926cec954790823e">tempest-ImagesOneServerTestJSON-174767469-project-member</nova:user>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:project uuid="9bd945273cd04d8981dcb3a319e8d026">tempest-ImagesOneServerTestJSON-174767469</nova:project>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <nova:port uuid="b62f3741-11c8-4840-a720-d6ee07f06284">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="serial">4c934302-d7cd-4826-835e-cab6dba97e3a</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="uuid">4c934302-d7cd-4826-835e-cab6dba97e3a</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:c9:9b:99"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <target dev="tapb62f3741-11"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/console.log" append="off"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:21 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:21 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:21 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:21 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.189 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Preparing to wait for external event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.189 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.190 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.190 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.191 253542 DEBUG nova.virt.libvirt.vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:14Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.191 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.192 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.193 253542 DEBUG os_vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.194 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.201 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb62f3741-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.202 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb62f3741-11, col_values=(('external_ids', {'iface-id': 'b62f3741-11c8-4840-a720-d6ee07f06284', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:9b:99', 'vm-uuid': '4c934302-d7cd-4826-835e-cab6dba97e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:21 np0005534516 NetworkManager[48915]: <info>  [1764059301.2049] manager: (tapb62f3741-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.211 253542 INFO os_vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11')#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.290 253542 DEBUG nova.virt.libvirt.imagebackend [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.310 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.310 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.311 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No VIF found with MAC fa:16:3e:c9:9b:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.311 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Using config drive#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.329 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.453 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(55c71d68d39f458abfbf1f8209ae0ece) on rbd image(59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.673 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updated VIF entry in instance network info cache for port b62f3741-11c8-4840-a720-d6ee07f06284. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.673 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.702 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.754 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating config drive at /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.759 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4cbba3g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.891 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4cbba3g" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.912 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.916 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:21 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 25 03:28:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 03:28:21 np0005534516 nova_compute[253538]: 2025-11-25 08:28:21.990 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] cloning vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk@55c71d68d39f458abfbf1f8209ae0ece to images/28dfc6fb-4f2c-4796-bac4-301408e87b71 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.131 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-deleted-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.132 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.132 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Processing event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-deleted-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.135 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.135 253542 WARNING nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.136 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.140 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059302.140143, 6998a6cf-b660-4558-98cf-bf5984775b1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.140 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.142 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.145 253542 INFO nova.virt.libvirt.driver [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance spawned successfully.#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.145 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.171 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.174 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.176 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.176 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.180 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.210 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.338 253542 INFO nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 12.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.339 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.341 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.343 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deleting local config drive /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.368 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] flattening images/28dfc6fb-4f2c-4796-bac4-301408e87b71 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:22 np0005534516 kernel: tapb62f3741-11: entered promiscuous mode
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.4012] manager: (tapb62f3741-11): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Nov 25 03:28:22 np0005534516 systemd-udevd[291055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:22Z|00195|binding|INFO|Claiming lport b62f3741-11c8-4840-a720-d6ee07f06284 for this chassis.
Nov 25 03:28:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:22Z|00196|binding|INFO|b62f3741-11c8-4840-a720-d6ee07f06284: Claiming fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.4150] device (tapb62f3741-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.4162] device (tapb62f3741-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 systemd-machined[215790]: New machine qemu-36-instance-0000001f.
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.448 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:9b:99 10.100.0.9'], port_security=['fa:16:3e:c9:9b:99 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4c934302-d7cd-4826-835e-cab6dba97e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0070171d-b7ca-4ed3-baea-814d9cd382de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bd945273cd04d8981dcb3a319e8d026', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ee7f6e6-6de7-4c93-8dc8-a8140fbc4a5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ce04b2-ff6f-4536-bd4d-73688e8a9b75, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b62f3741-11c8-4840-a720-d6ee07f06284) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.450 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b62f3741-11c8-4840-a720-d6ee07f06284 in datapath 0070171d-b7ca-4ed3-baea-814d9cd382de bound to our chassis#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.452 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0070171d-b7ca-4ed3-baea-814d9cd382de#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.457 253542 INFO nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 13.22 seconds to build instance.#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d602cb-3be1-49c0-9d49-521d82482bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.463 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0070171d-b1 in ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0070171d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81fe3fa5-9c94-49f1-b61f-1e89184d5aec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb245cfd-2cf0-4cf4-8a5d-d4533cf465e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.486 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[de65e1f6-ba14-467d-af91-96f470a59978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:22Z|00197|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 ovn-installed in OVS
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:22Z|00198|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 up in Southbound
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.508 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.515 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c441706a-d7ac-47bd-b5ea-b3723fdb0cce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.543 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0d024a-f0dc-46e9-b041-757b088acde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.5489] manager: (tap0070171d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36f617e5-a9fe-485a-bf6e-a52e1127b8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.582 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f8d0d8-e7f0-4785-8f42-0d5016c692df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.585 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[56915201-bd95-4b2f-bba1-639124801146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.6121] device (tap0070171d-b0): carrier: link connected
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.615 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f89de0df-00c3-4dd2-ba5b-263b57f401cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0b1f73-413e-40fc-b511-8b10c1c800ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0070171d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:a2:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465721, 'reachable_time': 36339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291444, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.652 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6322f8b4-84c8-4eeb-a5e3-5ed6a1660dab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:a24f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465721, 'tstamp': 465721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291445, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[696333dc-4ad0-468a-b813-6ef2b4cfe4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0070171d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:a2:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465721, 'reachable_time': 36339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291446, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.718 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[590fe60d-199c-483a-8eb9-903dc4c3e5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.800 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c7e3e4-69cb-42a9-93d1-54a69c0c530d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0070171d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0070171d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:22 np0005534516 NetworkManager[48915]: <info>  [1764059302.8051] manager: (tap0070171d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 kernel: tap0070171d-b0: entered promiscuous mode
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.808 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0070171d-b0, col_values=(('external_ids', {'iface-id': 'd8cb45b7-fcc6-4a5e-82c7-1991008fce33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:22Z|00199|binding|INFO|Releasing lport d8cb45b7-fcc6-4a5e-82c7-1991008fce33 from this chassis (sb_readonly=0)
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.811 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0581f5e2-108a-46d7-a6fb-9f19117525ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.818 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-0070171d-b7ca-4ed3-baea-814d9cd382de
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 0070171d-b7ca-4ed3-baea-814d9cd382de
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.818 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'env', 'PROCESS_TAG=haproxy-0070171d-b7ca-4ed3-baea-814d9cd382de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0070171d-b7ca-4ed3-baea-814d9cd382de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.874 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(55c71d68d39f458abfbf1f8209ae0ece) on rbd image(59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 25 03:28:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 03:28:22 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 25 03:28:22 np0005534516 nova_compute[253538]: 2025-11-25 08:28:22.976 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(snap) on rbd image(28dfc6fb-4f2c-4796-bac4-301408e87b71) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 186 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 489 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Nov 25 03:28:23 np0005534516 podman[291514]: 2025-11-25 08:28:23.24446996 +0000 UTC m=+0.059585828 container create e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:28:23 np0005534516 systemd[1]: Started libpod-conmon-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope.
Nov 25 03:28:23 np0005534516 podman[291514]: 2025-11-25 08:28:23.211826508 +0000 UTC m=+0.026942406 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38844f50af0338993d204476dce181cc29aa7db95beeb492a051beb5aa7d312/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:23 np0005534516 podman[291514]: 2025-11-25 08:28:23.344757813 +0000 UTC m=+0.159873711 container init e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:28:23 np0005534516 podman[291514]: 2025-11-25 08:28:23.352510966 +0000 UTC m=+0.167626834 container start e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:28:23 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : New worker (291578) forked
Nov 25 03:28:23 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : Loading success.
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.419 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059303.4188738, 4c934302-d7cd-4826-835e-cab6dba97e3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.420 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.443 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.448 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059303.4189668, 4c934302-d7cd-4826-835e-cab6dba97e3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.448 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.463 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.467 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:23 np0005534516 nova_compute[253538]: 2025-11-25 08:28:23.483 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 25 03:28:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 03:28:23 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 25 03:28:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:24Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:41:bb 10.100.0.9
Nov 25 03:28:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:24Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:41:bb 10.100.0.9
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 28dfc6fb-4f2c-4796-bac4-301408e87b71
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver 
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver 
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver #033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.413 253542 DEBUG nova.compute.manager [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.414 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.414 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.415 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.415 253542 DEBUG nova.compute.manager [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Processing event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.419 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.435 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059304.4341586, 4c934302-d7cd-4826-835e-cab6dba97e3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.437 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.437 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(snap) on rbd image(28dfc6fb-4f2c-4796-bac4-301408e87b71) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.441 253542 INFO nova.virt.libvirt.driver [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance spawned successfully.#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.441 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.456 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.460 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.466 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.466 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.469 253542 DEBUG nova.objects.instance [None req-70405417-f9f7-44d7-8864-4fb84eda8c4f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.507 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.511 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059304.5114098, 6998a6cf-b660-4558-98cf-bf5984775b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.511 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.531 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.543 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.572 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.618 253542 INFO nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 10.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.619 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:24.644 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:24.646 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.706 253542 INFO nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 11.40 seconds to build instance.#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.766 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:24 np0005534516 nova_compute[253538]: 2025-11-25 08:28:24.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 25 03:28:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 03:28:24 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 25 03:28:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:25 np0005534516 kernel: tap65fd7d0e-59 (unregistering): left promiscuous mode
Nov 25 03:28:25 np0005534516 NetworkManager[48915]: <info>  [1764059305.0291] device (tap65fd7d0e-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:28:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:25Z|00200|binding|INFO|Releasing lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f from this chassis (sb_readonly=0)
Nov 25 03:28:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:25Z|00201|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f down in Southbound
Nov 25 03:28:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:25Z|00202|binding|INFO|Removing iface tap65fd7d0e-59 ovn-installed in OVS
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.085 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:80:23 10.100.0.10'], port_security=['fa:16:3e:73:80:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6998a6cf-b660-4558-98cf-bf5984775b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=65fd7d0e-59ee-4411-92ee-f934016f1d1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.089 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 65fd7d0e-59ee-4411-92ee-f934016f1d1f in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:28:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 234 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 477 op/s
Nov 25 03:28:25 np0005534516 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 25 03:28:25 np0005534516 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 3.481s CPU time.
Nov 25 03:28:25 np0005534516 systemd-machined[215790]: Machine qemu-35-instance-0000001e terminated.
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.100 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0bc5aa-e4d6-4b84-a91d-1c469f8fea08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.104 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.208 253542 DEBUG nova.compute.manager [None req-70405417-f9f7-44d7-8864-4fb84eda8c4f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : haproxy version is 2.8.14-c23fe91
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : path to executable is /usr/sbin/haproxy
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : Exiting Master process...
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : Exiting Master process...
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [ALERT]    (291234) : Current worker (291237) exited with code 143 (Terminated)
Nov 25 03:28:25 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : All workers exited. Exiting... (0)
Nov 25 03:28:25 np0005534516 systemd[1]: libpod-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope: Deactivated successfully.
Nov 25 03:28:25 np0005534516 conmon[291226]: conmon a52ce674315fe1f72af4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope/container/memory.events
Nov 25 03:28:25 np0005534516 podman[291657]: 2025-11-25 08:28:25.310650928 +0000 UTC m=+0.054699902 container died a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:28:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d-userdata-shm.mount: Deactivated successfully.
Nov 25 03:28:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7948cd9e3db77c2b6e7fa96e239c18a011489fdccbd892f78fbbd5fdba626e7d-merged.mount: Deactivated successfully.
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.350 253542 DEBUG nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.352 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.354 253542 WARNING nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state suspended and task_state None.#033[00m
Nov 25 03:28:25 np0005534516 podman[291657]: 2025-11-25 08:28:25.370173074 +0000 UTC m=+0.114222048 container cleanup a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:25 np0005534516 systemd[1]: libpod-conmon-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope: Deactivated successfully.
Nov 25 03:28:25 np0005534516 podman[291687]: 2025-11-25 08:28:25.451845061 +0000 UTC m=+0.054875448 container remove a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.460 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[187a6dd0-309c-4898-9c04-fb122cd92f63]: (4, ('Tue Nov 25 08:28:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d)\na52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d\nTue Nov 25 08:28:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d)\na52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81b84588-dd03-45d9-a8cf-98ecc2f8dff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.462 253542 WARNING nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Image not found during snapshot: nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:25 np0005534516 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:25 np0005534516 nova_compute[253538]: 2025-11-25 08:28:25.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[695f0c5f-885f-40a0-ae8b-b85181f042de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.501 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea4f683-6108-4550-a367-a38e9452cd69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.502 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5626f217-eb1a-457d-9efb-697fd25f5c31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.522 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9a87f-08c7-47ca-bb32-c08bc44e1147]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465439, 'reachable_time': 20682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291706, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.526 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:28:25 np0005534516 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 03:28:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.526 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[adaec329-9eb6-489c-8f7f-a256bd0e76ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:26Z|00203|binding|INFO|Releasing lport d8cb45b7-fcc6-4a5e-82c7-1991008fce33 from this chassis (sb_readonly=0)
Nov 25 03:28:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:26Z|00204|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.205 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.297 253542 DEBUG nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.299 253542 WARNING nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state active and task_state image_snapshot.#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.354 253542 DEBUG nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.396 253542 INFO nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] instance snapshotting#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.629 253542 INFO nova.virt.libvirt.driver [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Beginning live snapshot process#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.787 253542 DEBUG nova.virt.libvirt.imagebackend [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.838 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.839 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.840 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.840 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.841 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.843 253542 INFO nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Terminating instance#033[00m
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.845 253542 DEBUG nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:28:26 np0005534516 kernel: tapa0d5bf0b-a7 (unregistering): left promiscuous mode
Nov 25 03:28:26 np0005534516 NetworkManager[48915]: <info>  [1764059306.9491] device (tapa0d5bf0b-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:28:26 np0005534516 nova_compute[253538]: 2025-11-25 08:28:26.958 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(ff48a6a1401942de8cd2cfe7818718dc) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:26Z|00205|binding|INFO|Releasing lport a0d5bf0b-a708-4159-968d-5c597313379d from this chassis (sb_readonly=0)
Nov 25 03:28:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:26Z|00206|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d down in Southbound
Nov 25 03:28:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:26Z|00207|binding|INFO|Removing iface tapa0d5bf0b-a7 ovn-installed in OVS
Nov 25 03:28:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.972 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:41:bb 10.100.0.9'], port_security=['fa:16:3e:67:41:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0d5bf0b-a708-4159-968d-5c597313379d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.973 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0d5bf0b-a708-4159-968d-5c597313379d in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis#033[00m
Nov 25 03:28:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.974 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:28:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d8344a-4dd6-4bae-aeda-d40137e7056c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.976 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 03:28:27 np0005534516 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 13.819s CPU time.
Nov 25 03:28:27 np0005534516 systemd-machined[215790]: Machine qemu-34-instance-0000001d terminated.
Nov 25 03:28:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.080 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance destroyed successfully.#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.081 253542 DEBUG nova.objects.instance [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 03:28:27 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 25 03:28:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 243 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 18 MiB/s rd, 13 MiB/s wr, 705 op/s
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.095 253542 DEBUG nova.virt.libvirt.vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:25Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.096 253542 DEBUG nova.network.os_vif_util [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.096 253542 DEBUG nova.network.os_vif_util [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.097 253542 DEBUG os_vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d5bf0b-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.109 253542 INFO os_vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7')#033[00m
Nov 25 03:28:27 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : haproxy version is 2.8.14-c23fe91
Nov 25 03:28:27 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : path to executable is /usr/sbin/haproxy
Nov 25 03:28:27 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [WARNING]  (290331) : Exiting Master process...
Nov 25 03:28:27 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [ALERT]    (290331) : Current worker (290333) exited with code 143 (Terminated)
Nov 25 03:28:27 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [WARNING]  (290331) : All workers exited. Exiting... (0)
Nov 25 03:28:27 np0005534516 systemd[1]: libpod-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope: Deactivated successfully.
Nov 25 03:28:27 np0005534516 podman[291784]: 2025-11-25 08:28:27.124982936 +0000 UTC m=+0.066047327 container died 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:28:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644-userdata-shm.mount: Deactivated successfully.
Nov 25 03:28:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-320269360c4cc6b75b0f40afebd67ecea31de81414d235d4d1629a905a53b4de-merged.mount: Deactivated successfully.
Nov 25 03:28:27 np0005534516 podman[291784]: 2025-11-25 08:28:27.175829311 +0000 UTC m=+0.116893672 container cleanup 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:28:27 np0005534516 systemd[1]: libpod-conmon-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope: Deactivated successfully.
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.191 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] cloning vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk@ff48a6a1401942de8cd2cfe7818718dc to images/a5f8815f-c59d-400a-82d8-e11527b9e78e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:27 np0005534516 podman[291844]: 2025-11-25 08:28:27.271137095 +0000 UTC m=+0.065407738 container remove 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f185328-319b-4932-b888-9efbb7b017e4]: (4, ('Tue Nov 25 08:28:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644)\n1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644\nTue Nov 25 08:28:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644)\n1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ef9898-804b-4001-9173-279f293d284b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:27 np0005534516 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.282 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01c7eafb-4531-496f-a90d-de3f06b88c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2271e17-05f0-4694-8688-a78800b00e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbefe770-6811-4c8a-bac6-cf33f363c8f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.338 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23a658a1-c271-43e4-ac89-1a9c6d5b9d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464323, 'reachable_time': 28199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291898, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.340 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.340 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[746c8db1-fd6e-4d62-9253-bc41f5efac4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:27 np0005534516 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.358 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] flattening images/a5f8815f-c59d-400a-82d8-e11527b9e78e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.438 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 WARNING nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state suspended and task_state image_snapshot_pending.#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.441 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 WARNING nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received unexpected event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.460 253542 DEBUG nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.500 253542 INFO nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] instance snapshotting#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.501 253542 WARNING nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.636 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] removing snapshot(ff48a6a1401942de8cd2cfe7818718dc) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.648 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.700 253542 INFO nova.virt.libvirt.driver [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deleting instance files /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_del#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.700 253542 INFO nova.virt.libvirt.driver [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deletion of /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_del complete#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.749 253542 INFO nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG oslo.service.loopingcall [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG nova.network.neutron [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.837 253542 INFO nova.virt.libvirt.driver [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Beginning cold snapshot process#033[00m
Nov 25 03:28:27 np0005534516 nova_compute[253538]: 2025-11-25 08:28:27.947 253542 DEBUG nova.virt.libvirt.imagebackend [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 25 03:28:28 np0005534516 nova_compute[253538]: 2025-11-25 08:28:28.124 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(0c03632062564fcfafdab5877261a564) on rbd image(6998a6cf-b660-4558-98cf-bf5984775b1d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:28 np0005534516 nova_compute[253538]: 2025-11-25 08:28:28.164 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(snap) on rbd image(a5f8815f-c59d-400a-82d8-e11527b9e78e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:28 np0005534516 nova_compute[253538]: 2025-11-25 08:28:28.967 253542 DEBUG nova.network.neutron [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:28 np0005534516 nova_compute[253538]: 2025-11-25 08:28:28.982 253542 INFO nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 1.23 seconds to deallocate network for instance.#033[00m
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:28:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.021 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.021 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 209 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 7.3 MiB/s wr, 545 op/s
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.128 253542 DEBUG oslo_concurrency.processutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.193 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk@0c03632062564fcfafdab5877261a564 to images/f94ff9e5-3861-4109-9467-e810e355f205 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.284 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/f94ff9e5-3861-4109-9467-e810e355f205 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.498 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(0c03632062564fcfafdab5877261a564) on rbd image(6998a6cf-b660-4558-98cf-bf5984775b1d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796532119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.542 253542 DEBUG oslo_concurrency.processutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.550 253542 DEBUG nova.compute.provider_tree [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.561 253542 DEBUG nova.compute.manager [req-4def1102-ab73-40e1-9fe2-d62db9a5c97b req-d7c96622-9266-4dc0-974d-5e9a85c9dd3b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-deleted-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.564 253542 DEBUG nova.scheduler.client.report [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.586 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.609 253542 INFO nova.scheduler.client.report [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.673 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:29 np0005534516 nova_compute[253538]: 2025-11-25 08:28:29.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 25 03:28:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 03:28:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 25 03:28:30 np0005534516 nova_compute[253538]: 2025-11-25 08:28:30.077 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(f94ff9e5-3861-4109-9467-e810e355f205) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:30 np0005534516 nova_compute[253538]: 2025-11-25 08:28:30.393 253542 INFO nova.virt.libvirt.driver [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Snapshot image upload complete#033[00m
Nov 25 03:28:30 np0005534516 nova_compute[253538]: 2025-11-25 08:28:30.394 253542 INFO nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 4.00 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:28:30 np0005534516 podman[292117]: 2025-11-25 08:28:30.863255109 +0000 UTC m=+0.104516799 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 03:28:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 201 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 441 op/s
Nov 25 03:28:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 25 03:28:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 03:28:31 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 25 03:28:31 np0005534516 nova_compute[253538]: 2025-11-25 08:28:31.675 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059296.6746256, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:31 np0005534516 nova_compute[253538]: 2025-11-25 08:28:31.676 253542 INFO nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:31 np0005534516 nova_compute[253538]: 2025-11-25 08:28:31.695 253542 DEBUG nova.compute.manager [None req-1b16c6c4-013a-43d8-9941-88b8870d9237 - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:32 np0005534516 nova_compute[253538]: 2025-11-25 08:28:32.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 25 03:28:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 03:28:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 25 03:28:32 np0005534516 nova_compute[253538]: 2025-11-25 08:28:32.624 253542 INFO nova.virt.libvirt.driver [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Snapshot image upload complete#033[00m
Nov 25 03:28:32 np0005534516 nova_compute[253538]: 2025-11-25 08:28:32.624 253542 INFO nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 5.12 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:28:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 217 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 8.5 MiB/s wr, 415 op/s
Nov 25 03:28:33 np0005534516 nova_compute[253538]: 2025-11-25 08:28:33.457 253542 DEBUG nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:33 np0005534516 nova_compute[253538]: 2025-11-25 08:28:33.514 253542 INFO nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] instance snapshotting#033[00m
Nov 25 03:28:33 np0005534516 nova_compute[253538]: 2025-11-25 08:28:33.737 253542 INFO nova.virt.libvirt.driver [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Beginning live snapshot process#033[00m
Nov 25 03:28:33 np0005534516 nova_compute[253538]: 2025-11-25 08:28:33.861 253542 DEBUG nova.virt.libvirt.imagebackend [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:34 np0005534516 nova_compute[253538]: 2025-11-25 08:28:34.057 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(f40154d546174793b9fabb1461848c07) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 25 03:28:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 03:28:34 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 25 03:28:34 np0005534516 nova_compute[253538]: 2025-11-25 08:28:34.263 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] cloning vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk@f40154d546174793b9fabb1461848c07 to images/1a54e9d9-bd25-49b0-8ef1-8fd05dc29219 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:34 np0005534516 nova_compute[253538]: 2025-11-25 08:28:34.351 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] flattening images/1a54e9d9-bd25-49b0-8ef1-8fd05dc29219 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:34 np0005534516 nova_compute[253538]: 2025-11-25 08:28:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:34 np0005534516 podman[292240]: 2025-11-25 08:28:34.873350977 +0000 UTC m=+0.118593619 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 03:28:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 03:28:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 25 03:28:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 03:28:35 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.090 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] removing snapshot(f40154d546174793b9fabb1461848c07) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 198 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 6.0 MiB/s wr, 266 op/s
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.347 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.348 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.364 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.422 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.423 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.429 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.430 253542 INFO nova.compute.claims [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:28:35 np0005534516 nova_compute[253538]: 2025-11-25 08:28:35.602 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 25 03:28:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 03:28:36 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 25 03:28:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726501938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.088 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.093 253542 DEBUG nova.compute.provider_tree [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.106 253542 DEBUG nova.scheduler.client.report [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.128 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.128 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.204 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.205 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.243 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.278 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.286 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(snap) on rbd image(1a54e9d9-bd25-49b0-8ef1-8fd05dc29219) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.370 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.371 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.372 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating image(s)#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.397 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.447 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.474 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.477 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.505 253542 DEBUG nova.policy [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.567 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.568 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.568 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.569 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.596 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.600 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:36 np0005534516 nova_compute[253538]: 2025-11-25 08:28:36.987 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.018 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Successfully created port: 6bcb5ada-83f7-419f-9909-98ba6f37630c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.053 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 202 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.7 MiB/s wr, 276 op/s
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.201 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.201 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.204 253542 INFO nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Terminating instance#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.205 253542 DEBUG nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.212 253542 INFO nova.virt.libvirt.driver [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance destroyed successfully.#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.212 253542 DEBUG nova.objects.instance [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.224 253542 DEBUG nova.virt.libvirt.vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.224 253542 DEBUG nova.network.os_vif_util [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.225 253542 DEBUG nova.network.os_vif_util [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.226 253542 DEBUG os_vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.228 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65fd7d0e-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.234 253542 INFO os_vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59')#033[00m
Nov 25 03:28:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 25 03:28:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 03:28:37 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.406 253542 DEBUG nova.objects.instance [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.419 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.420 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Ensure instance console log exists: /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.421 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.421 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.422 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.775 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Successfully updated port: 6bcb5ada-83f7-419f-9909-98ba6f37630c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.793 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.794 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.794 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:37 np0005534516 nova_compute[253538]: 2025-11-25 08:28:37.979 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.728 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.756 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.757 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance network_info: |[{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.759 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start _get_guest_xml network_info=[{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.765 253542 WARNING nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.771 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.772 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.774 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.775 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.775 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:38 np0005534516 nova_compute[253538]: 2025-11-25 08:28:38.782 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:38Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 03:28:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:38Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.035 253542 DEBUG nova.compute.manager [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-changed-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG nova.compute.manager [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Refreshing instance network info cache due to event network-changed-6bcb5ada-83f7-419f-9909-98ba6f37630c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.037 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Refreshing network info cache for port 6bcb5ada-83f7-419f-9909-98ba6f37630c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 240 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 372 op/s
Nov 25 03:28:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669809045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.283 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.337 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.344 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.388 253542 INFO nova.virt.libvirt.driver [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Snapshot image upload complete#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.389 253542 INFO nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 5.87 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.396 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.396 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.424 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.516 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.517 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.522 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.523 253542 INFO nova.compute.claims [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.613 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.614 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.645 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.708 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.763 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042903216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.845 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.848 253542 DEBUG nova.virt.libvirt.vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:36Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.849 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.851 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.853 253542 DEBUG nova.objects.instance [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.867 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <uuid>c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</uuid>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <name>instance-00000020</name>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-492840202</nova:name>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:38</nova:creationTime>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <nova:port uuid="6bcb5ada-83f7-419f-9909-98ba6f37630c">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="serial">c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="uuid">c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:34:c6:fc"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <target dev="tap6bcb5ada-83"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/console.log" append="off"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:39 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:39 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:39 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:39 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.876 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Preparing to wait for external event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.877 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.877 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.878 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.879 253542 DEBUG nova.virt.libvirt.vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:36Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.879 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.880 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.881 253542 DEBUG os_vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.886 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.887 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.891 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bcb5ada-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.892 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bcb5ada-83, col_values=(('external_ids', {'iface-id': '6bcb5ada-83f7-419f-9909-98ba6f37630c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:c6:fc', 'vm-uuid': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:39 np0005534516 NetworkManager[48915]: <info>  [1764059319.8950] manager: (tap6bcb5ada-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.900 253542 INFO os_vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83')#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.984 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.984 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.985 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:34:c6:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:39 np0005534516 nova_compute[253538]: 2025-11-25 08:28:39.985 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Using config drive#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.013 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 podman[292693]: 2025-11-25 08:28:40.072459768 +0000 UTC m=+0.142012057 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.211 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059305.20793, 6998a6cf-b660-4558-98cf-bf5984775b1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.212 253542 INFO nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818203405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.234 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.236 253542 DEBUG nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.241 253542 DEBUG nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.244 253542 DEBUG nova.compute.provider_tree [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.261 253542 DEBUG nova.scheduler.client.report [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.265 253542 INFO nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.280 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.281 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.282 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.291 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.292 253542 INFO nova.compute.claims [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.298 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updated VIF entry in instance network info cache for port 6bcb5ada-83f7-419f-9909-98ba6f37630c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.298 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.339 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.353 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.355 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.383 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.402 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating config drive at /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:40 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e72d64c1-bc6f-44e3-b2ce-04588f38a37b does not exist
Nov 25 03:28:40 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8d9dbce8-f271-4654-83b1-ea4a108a988a does not exist
Nov 25 03:28:40 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a8917cfd-7774-4c30-aa59-367cbee98299 does not exist
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.408 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7e1pms2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.452 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.464 253542 INFO nova.virt.libvirt.driver [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deleting instance files /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d_del#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.465 253542 INFO nova.virt.libvirt.driver [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deletion of /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d_del complete#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.529 253542 DEBUG nova.policy [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.552 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.582 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7e1pms2" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.585 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.586 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.587 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating image(s)#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.626 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.655 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.703 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.716 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.747 253542 INFO nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 3.54 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.749 253542 DEBUG oslo.service.loopingcall [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.784 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.788 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.815 253542 DEBUG nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.816 253542 DEBUG nova.network.neutron [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.819 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.819 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.820 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.820 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.847 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.851 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/223984199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.988 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:40 np0005534516 nova_compute[253538]: 2025-11-25 08:28:40.989 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deleting local config drive /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:41 np0005534516 virtqemud[253839]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 03:28:41 np0005534516 virtqemud[253839]: hostname: compute-0
Nov 25 03:28:41 np0005534516 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.007 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.017 253542 DEBUG nova.compute.provider_tree [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.039 253542 DEBUG nova.scheduler.client.report [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:41 np0005534516 kernel: tap6bcb5ada-83: entered promiscuous mode
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.0505] manager: (tap6bcb5ada-83): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 25 03:28:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:41Z|00208|binding|INFO|Claiming lport 6bcb5ada-83f7-419f-9909-98ba6f37630c for this chassis.
Nov 25 03:28:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:41Z|00209|binding|INFO|6bcb5ada-83f7-419f-9909-98ba6f37630c: Claiming fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.053 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.059 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:c6:fc 10.100.0.8'], port_security=['fa:16:3e:34:c6:fc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6bcb5ada-83f7-419f-9909-98ba6f37630c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.060 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.061 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.062 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcb5ada-83f7-419f-9909-98ba6f37630c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.065 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076#033[00m
Nov 25 03:28:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:41Z|00210|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c ovn-installed in OVS
Nov 25 03:28:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:41Z|00211|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c up in Southbound
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[156207ff-6454-42fe-b4ea-85584de4c47a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.077 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.080 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15bf3519-e8da-4448-bffe-2a262dd1edb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.081 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34f45e27-bde5-4059-b8e9-c921ba89c167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 systemd-machined[215790]: New machine qemu-37-instance-00000020.
Nov 25 03:28:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 241 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 317 op/s
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.099176435 +0000 UTC m=+0.092280842 container create b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.098 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6d398dcb-deab-4af5-9447-d9f4bd4e80f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.103 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.103 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.117 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:28:41 np0005534516 systemd-udevd[293088]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.130 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72c79670-b6f2-471f-9d11-9e1302591464]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.132 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.1371] device (tap6bcb5ada-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.1382] device (tap6bcb5ada-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.046795878 +0000 UTC m=+0.039900355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.170 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6bded1e5-2e60-4ef6-914e-9608d2f481aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 systemd[1]: Started libpod-conmon-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope.
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.1781] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.180 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[968ccfb0-7ee4-4e0c-96ac-bcfc3c2c8985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.207 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1f34ad-7e5b-4476-9567-902103180dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.209 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.210 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.211 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating image(s)#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c45e0a1f-ed1e-437c-b713-d24c5b10e8e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.233 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.2342] device (tap52a7668b-f0): carrier: link connected
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.241 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8defb0f-99d6-4e59-bd1b-038af4308ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7c8f81-84a2-4bc9-91f0-920e32000497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467583, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293150, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.264 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.275 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89648be-afee-4f4a-9114-5f478beb66a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467583, 'tstamp': 467583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293162, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.27879829 +0000 UTC m=+0.271902707 container init b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.291359597 +0000 UTC m=+0.284463994 container start b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.295784749 +0000 UTC m=+0.288889156 container attach b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:28:41 np0005534516 sweet_gauss[293103]: 167 167
Nov 25 03:28:41 np0005534516 systemd[1]: libpod-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope: Deactivated successfully.
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.299809911 +0000 UTC m=+0.292914298 container died b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68cf7bed-7a52-43c3-a420-0ffc4de57627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467583, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293181, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.307 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.312 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd73c1fb-c57e-482c-a0fc-9363da5bbc9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.346 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d59a09d-96b9-4c53-8859-cee42cfd57c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:41 np0005534516 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 03:28:41 np0005534516 NetworkManager[48915]: <info>  [1764059321.4528] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.455 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:41Z|00212|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.468 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.468 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.469 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.470 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.474 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7baf434-f3be-453e-98e9-ca8063299a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.475 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.476 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.498 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.503 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cf07e611-51eb-4bcf-8757-8f75d3807da6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d11ed5076b45d3f966f16c650926cd17b316ab1961582b80e4362b526b06dfcd-merged.mount: Deactivated successfully.
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.531 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.542 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:41 np0005534516 podman[293060]: 2025-11-25 08:28:41.55849465 +0000 UTC m=+0.551599037 container remove b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:28:41 np0005534516 systemd[1]: libpod-conmon-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope: Deactivated successfully.
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.618 253542 DEBUG nova.policy [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9a96a98d6bb448aab904a8763d3675ec', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.676 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.6757755, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.676 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.720 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.732 253542 DEBUG nova.objects.instance [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.740 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.6759026, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.740 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.751 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.752 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Ensure instance console log exists: /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.756 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.757 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.757 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.760 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.763 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.781 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 25 03:28:41 np0005534516 podman[293369]: 2025-11-25 08:28:41.83741396 +0000 UTC m=+0.101321161 container create 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.843 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Successfully created port: 799c50c8-d1e7-4c15-a3d9-29903d576304 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 03:28:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 25 03:28:41 np0005534516 podman[293369]: 2025-11-25 08:28:41.77119945 +0000 UTC m=+0.035106681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.877 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cf07e611-51eb-4bcf-8757-8f75d3807da6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:41 np0005534516 systemd[1]: Started libpod-conmon-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope.
Nov 25 03:28:41 np0005534516 podman[293403]: 2025-11-25 08:28:41.902000845 +0000 UTC m=+0.094949265 container create 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:28:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 systemd[1]: Started libpod-conmon-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope.
Nov 25 03:28:41 np0005534516 podman[293369]: 2025-11-25 08:28:41.945854728 +0000 UTC m=+0.209761959 container init 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:28:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:41 np0005534516 podman[293369]: 2025-11-25 08:28:41.955342199 +0000 UTC m=+0.219249410 container start 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:28:41 np0005534516 podman[293369]: 2025-11-25 08:28:41.958691572 +0000 UTC m=+0.222598833 container attach 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:28:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db1af69cade0c066d85c02aae054723464bc6f1508df360470a31f2e5094b6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:41 np0005534516 podman[293403]: 2025-11-25 08:28:41.876250964 +0000 UTC m=+0.069199384 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:41 np0005534516 podman[293403]: 2025-11-25 08:28:41.974043027 +0000 UTC m=+0.166991487 container init 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.977 253542 DEBUG nova.compute.manager [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:41 np0005534516 podman[293403]: 2025-11-25 08:28:41.979224889 +0000 UTC m=+0.172173319 container start 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.980 253542 DEBUG nova.compute.manager [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Processing event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.981 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.989 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.9890084, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.989 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:41 np0005534516 nova_compute[253538]: 2025-11-25 08:28:41.991 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.002 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] resizing rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : New worker (293468) forked
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : Loading success.
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.038 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.040 253542 INFO nova.virt.libvirt.driver [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance spawned successfully.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.040 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.043 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.064 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.102 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059307.078669, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.103 253542 INFO nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.107 253542 DEBUG nova.objects.instance [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'migration_context' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.126 253542 DEBUG nova.compute.manager [None req-92a3afe2-aa5c-4ae9-aa51-aae8fb994f8f - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.127 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.127 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Ensure instance console log exists: /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.137 253542 INFO nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 5.77 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.137 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.205 253542 INFO nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 6.80 seconds to build instance.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.220 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.303 253542 DEBUG nova.network.neutron [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.317 253542 INFO nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 1.50 seconds to deallocate network for instance.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.363 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.364 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.443 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Successfully created port: d4493bab-df0a-4934-ab26-43dae0dbae72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.496 253542 DEBUG oslo_concurrency.processutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.632 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.634 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.634 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.635 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.635 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.637 253542 INFO nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Terminating instance#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.639 253542 DEBUG nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:28:42 np0005534516 kernel: tapb62f3741-11 (unregistering): left promiscuous mode
Nov 25 03:28:42 np0005534516 NetworkManager[48915]: <info>  [1764059322.6985] device (tapb62f3741-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:28:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:42Z|00213|binding|INFO|Releasing lport b62f3741-11c8-4840-a720-d6ee07f06284 from this chassis (sb_readonly=0)
Nov 25 03:28:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:42Z|00214|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 down in Southbound
Nov 25 03:28:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:42Z|00215|binding|INFO|Removing iface tapb62f3741-11 ovn-installed in OVS
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.710 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.716 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:9b:99 10.100.0.9'], port_security=['fa:16:3e:c9:9b:99 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4c934302-d7cd-4826-835e-cab6dba97e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0070171d-b7ca-4ed3-baea-814d9cd382de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bd945273cd04d8981dcb3a319e8d026', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ee7f6e6-6de7-4c93-8dc8-a8140fbc4a5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ce04b2-ff6f-4536-bd4d-73688e8a9b75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b62f3741-11c8-4840-a720-d6ee07f06284) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.717 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b62f3741-11c8-4840-a720-d6ee07f06284 in datapath 0070171d-b7ca-4ed3-baea-814d9cd382de unbound from our chassis#033[00m
Nov 25 03:28:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.719 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0070171d-b7ca-4ed3-baea-814d9cd382de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:28:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd150b9b-fe78-41db-b8bc-cf0bc24c0028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.721 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de namespace which is not needed anymore#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.731 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:42 np0005534516 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 25 03:28:42 np0005534516 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 14.134s CPU time.
Nov 25 03:28:42 np0005534516 systemd-machined[215790]: Machine qemu-36-instance-0000001f terminated.
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.760 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Successfully updated port: 799c50c8-d1e7-4c15-a3d9-29903d576304 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.774 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.775 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.775 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : haproxy version is 2.8.14-c23fe91
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : path to executable is /usr/sbin/haproxy
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [WARNING]  (291575) : Exiting Master process...
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [ALERT]    (291575) : Current worker (291578) exited with code 143 (Terminated)
Nov 25 03:28:42 np0005534516 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [WARNING]  (291575) : All workers exited. Exiting... (0)
Nov 25 03:28:42 np0005534516 systemd[1]: libpod-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope: Deactivated successfully.
Nov 25 03:28:42 np0005534516 podman[293560]: 2025-11-25 08:28:42.877296862 +0000 UTC m=+0.062177560 container died e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.880 253542 INFO nova.virt.libvirt.driver [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance destroyed successfully.#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.881 253542 DEBUG nova.objects.instance [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'resources' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.895 253542 DEBUG nova.virt.libvirt.vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:39Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.896 253542 DEBUG nova.network.os_vif_util [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.897 253542 DEBUG nova.network.os_vif_util [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.897 253542 DEBUG os_vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.899 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.899 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb62f3741-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.906 253542 INFO os_vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11')#033[00m
Nov 25 03:28:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0-userdata-shm.mount: Deactivated successfully.
Nov 25 03:28:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b38844f50af0338993d204476dce181cc29aa7db95beeb492a051beb5aa7d312-merged.mount: Deactivated successfully.
Nov 25 03:28:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879197356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.964 253542 DEBUG oslo_concurrency.processutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.970 253542 DEBUG nova.compute.provider_tree [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:42 np0005534516 podman[293560]: 2025-11-25 08:28:42.970919229 +0000 UTC m=+0.155799937 container cleanup e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:28:42 np0005534516 systemd[1]: libpod-conmon-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope: Deactivated successfully.
Nov 25 03:28:42 np0005534516 nova_compute[253538]: 2025-11-25 08:28:42.988 253542 DEBUG nova.scheduler.client.report [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.003 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.011 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:43 np0005534516 podman[293625]: 2025-11-25 08:28:43.034658251 +0000 UTC m=+0.043816852 container remove e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.038 253542 INFO nova.scheduler.client.report [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 6998a6cf-b660-4558-98cf-bf5984775b1d#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1310e79f-e6c0-42c7-bf5b-e616aea94f66]: (4, ('Tue Nov 25 08:28:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de (e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0)\ne1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0\nTue Nov 25 08:28:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de (e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0)\ne1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.041 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[807a07a0-46e8-46cb-baf6-0e9091d6e6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.043 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0070171d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:43 np0005534516 kernel: tap0070171d-b0: left promiscuous mode
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5db841-c956-40d5-8d3b-c94f66698622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.084 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50393e9c-abf9-44c7-b0c3-61d9984f8772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b083f00-9f03-4bc8-a34b-9cc4744885bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 257 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 9.0 MiB/s wr, 297 op/s
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.104 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.114 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d78b7ccb-0757-4694-99df-93bd8a59b731]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465713, 'reachable_time': 27140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293645, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.117 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:28:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.117 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[11c3a51b-3c39-4a0c-8712-b62b085dd2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:43 np0005534516 systemd[1]: run-netns-ovnmeta\x2d0070171d\x2db7ca\x2d4ed3\x2dbaea\x2d814d9cd382de.mount: Deactivated successfully.
Nov 25 03:28:43 np0005534516 optimistic_kowalevski[293426]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:28:43 np0005534516 optimistic_kowalevski[293426]: --> relative data size: 1.0
Nov 25 03:28:43 np0005534516 optimistic_kowalevski[293426]: --> All data devices are unavailable
Nov 25 03:28:43 np0005534516 systemd[1]: libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Deactivated successfully.
Nov 25 03:28:43 np0005534516 systemd[1]: libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Consumed 1.074s CPU time.
Nov 25 03:28:43 np0005534516 conmon[293426]: conmon 8291d7d8ca45642f360e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope/container/memory.events
Nov 25 03:28:43 np0005534516 podman[293369]: 2025-11-25 08:28:43.178926669 +0000 UTC m=+1.442833870 container died 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.179 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Successfully updated port: d4493bab-df0a-4934-ab26-43dae0dbae72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.194 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.195 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.195 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:28:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c-merged.mount: Deactivated successfully.
Nov 25 03:28:43 np0005534516 podman[293369]: 2025-11-25 08:28:43.256872382 +0000 UTC m=+1.520779583 container remove 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.267 253542 DEBUG nova.compute.manager [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.267 253542 DEBUG nova.compute.manager [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing instance network info cache due to event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.268 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:43 np0005534516 systemd[1]: libpod-conmon-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Deactivated successfully.
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.378 253542 INFO nova.virt.libvirt.driver [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deleting instance files /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a_del#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.379 253542 INFO nova.virt.libvirt.driver [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deletion of /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a_del complete#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.435 253542 INFO nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG oslo.service.loopingcall [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG nova.network.neutron [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.567 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.846353686 +0000 UTC m=+0.048705107 container create 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:28:43 np0005534516 systemd[1]: Started libpod-conmon-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope.
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.817586521 +0000 UTC m=+0.019937972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.949754014 +0000 UTC m=+0.152105535 container init 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.96190469 +0000 UTC m=+0.164256121 container start 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.967 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.965294603 +0000 UTC m=+0.167646094 container attach 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:28:43 np0005534516 boring_wilson[293823]: 167 167
Nov 25 03:28:43 np0005534516 systemd[1]: libpod-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope: Deactivated successfully.
Nov 25 03:28:43 np0005534516 podman[293805]: 2025-11-25 08:28:43.973109949 +0000 UTC m=+0.175461380 container died 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.984 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.984 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance network_info: |[{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.986 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start _get_guest_xml network_info=[{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.990 253542 WARNING nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.996 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:43 np0005534516 nova_compute[253538]: 2025-11-25 08:28:43.996 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.000 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.000 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.003 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.003 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.005 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0797a8ee5200bbda7b4216fba6239f07c6a0fc4b80c27932480a39e4b316e0e1-merged.mount: Deactivated successfully.
Nov 25 03:28:44 np0005534516 podman[293805]: 2025-11-25 08:28:44.126791007 +0000 UTC m=+0.329142468 container remove 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 03:28:44 np0005534516 systemd[1]: libpod-conmon-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope: Deactivated successfully.
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.156 253542 DEBUG nova.network.neutron [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.170 253542 INFO nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.208 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.209 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.300 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received unexpected event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-deleted-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-changed-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Refreshing instance network info cache due to event network-changed-799c50c8-d1e7-4c15-a3d9-29903d576304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Refreshing network info cache for port 799c50c8-d1e7-4c15-a3d9-29903d576304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:44 np0005534516 podman[293866]: 2025-11-25 08:28:44.334423686 +0000 UTC m=+0.062025665 container create 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.335 253542 DEBUG oslo_concurrency.processutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:44 np0005534516 podman[293866]: 2025-11-25 08:28:44.297526006 +0000 UTC m=+0.025128005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.392 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:44 np0005534516 systemd[1]: Started libpod-conmon-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope.
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance network_info: |[{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.419 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.422 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start _get_guest_xml network_info=[{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:28:44 np0005534516 podman[293866]: 2025-11-25 08:28:44.441083974 +0000 UTC m=+0.168686033 container init 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.446 253542 WARNING nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:28:44 np0005534516 podman[293866]: 2025-11-25 08:28:44.448686154 +0000 UTC m=+0.176288133 container start 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.451 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:28:44 np0005534516 podman[293866]: 2025-11-25 08:28:44.452247292 +0000 UTC m=+0.179849321 container attach 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.452 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.469 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.469 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.472 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352777220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.474 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.500 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.521 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.525 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400758869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.799 253542 DEBUG oslo_concurrency.processutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.805 253542 DEBUG nova.compute.provider_tree [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.818 253542 DEBUG nova.scheduler.client.report [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.842 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.872 253542 INFO nova.scheduler.client.report [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Deleted allocations for instance 4c934302-d7cd-4826-835e-cab6dba97e3a#033[00m
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742349790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.894 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.915 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.919 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:44 np0005534516 nova_compute[253538]: 2025-11-25 08:28:44.952 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.028 253542 DEBUG nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557599900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.064 253542 INFO nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] instance snapshotting#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.080 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.081 253542 DEBUG nova.virt.libvirt.vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:40Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.081 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.082 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.083 253542 DEBUG nova.objects.instance [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.093 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <uuid>8b29fb31-718d-4926-bf4f-bae461ea70ef</uuid>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <name>instance-00000021</name>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-1025077292</nova:name>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:43</nova:creationTime>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:port uuid="799c50c8-d1e7-4c15-a3d9-29903d576304">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="serial">8b29fb31-718d-4926-bf4f-bae461ea70ef</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="uuid">8b29fb31-718d-4926-bf4f-bae461ea70ef</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:fd:03:96"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="tap799c50c8-d1"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/console.log" append="off"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:45 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:45 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.093 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Preparing to wait for external event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG nova.virt.libvirt.vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:40Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.095 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.095 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG os_vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 232 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 9.6 MiB/s wr, 450 op/s
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.097 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap799c50c8-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap799c50c8-d1, col_values=(('external_ids', {'iface-id': '799c50c8-d1e7-4c15-a3d9-29903d576304', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:03:96', 'vm-uuid': '8b29fb31-718d-4926-bf4f-bae461ea70ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 NetworkManager[48915]: <info>  [1764059325.1024] manager: (tap799c50c8-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.109 253542 INFO os_vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1')#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.153 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.153 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.154 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:fd:03:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.154 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Using config drive#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.180 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]: {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    "0": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "devices": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "/dev/loop3"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            ],
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_name": "ceph_lv0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_size": "21470642176",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "name": "ceph_lv0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "tags": {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_name": "ceph",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.crush_device_class": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.encrypted": "0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_id": "0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.vdo": "0"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            },
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "vg_name": "ceph_vg0"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        }
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    ],
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    "1": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "devices": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "/dev/loop4"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            ],
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_name": "ceph_lv1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_size": "21470642176",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "name": "ceph_lv1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "tags": {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_name": "ceph",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.crush_device_class": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.encrypted": "0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_id": "1",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.vdo": "0"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            },
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "vg_name": "ceph_vg1"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        }
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    ],
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    "2": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "devices": [
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "/dev/loop5"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            ],
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_name": "ceph_lv2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_size": "21470642176",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "name": "ceph_lv2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "tags": {
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.cluster_name": "ceph",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.crush_device_class": "",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.encrypted": "0",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osd_id": "2",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:                "ceph.vdo": "0"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            },
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "type": "block",
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:            "vg_name": "ceph_vg2"
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:        }
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]:    ]
Nov 25 03:28:45 np0005534516 epic_mccarthy[293883]: }
Nov 25 03:28:45 np0005534516 systemd[1]: libpod-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope: Deactivated successfully.
Nov 25 03:28:45 np0005534516 podman[293866]: 2025-11-25 08:28:45.263697571 +0000 UTC m=+0.991299550 container died 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:28:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd-merged.mount: Deactivated successfully.
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.321 253542 INFO nova.virt.libvirt.driver [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Beginning live snapshot process#033[00m
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:28:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011737728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:28:45 np0005534516 podman[293866]: 2025-11-25 08:28:45.339687731 +0000 UTC m=+1.067289710 container remove 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:28:45 np0005534516 systemd[1]: libpod-conmon-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope: Deactivated successfully.
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.376 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.378 253542 DEBUG nova.virt.libvirt.vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.378 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.379 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.380 253542 DEBUG nova.objects.instance [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'pci_devices' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.395 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <uuid>cf07e611-51eb-4bcf-8757-8f75d3807da6</uuid>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <name>instance-00000022</name>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersTestJSON-server-159347704</nova:name>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:28:44</nova:creationTime>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:user uuid="9a96a98d6bb448aab904a8763d3675ec">tempest-ServersTestJSON-1586688039-project-member</nova:user>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:project uuid="17d31dbb1e4542daaa43d2fda87e18ad">tempest-ServersTestJSON-1586688039</nova:project>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <nova:port uuid="d4493bab-df0a-4934-ab26-43dae0dbae72">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="serial">cf07e611-51eb-4bcf-8757-8f75d3807da6</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="uuid">cf07e611-51eb-4bcf-8757-8f75d3807da6</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/cf07e611-51eb-4bcf-8757-8f75d3807da6_disk">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:6b:9f:ca"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <target dev="tapd4493bab-df"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/console.log" append="off"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:28:45 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:28:45 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:28:45 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:28:45 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Preparing to wait for external event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.397 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.397 253542 DEBUG nova.virt.libvirt.vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.398 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.398 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG os_vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.400 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.451 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4493bab-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.451 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4493bab-df, col_values=(('external_ids', {'iface-id': 'd4493bab-df0a-4934-ab26-43dae0dbae72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:9f:ca', 'vm-uuid': 'cf07e611-51eb-4bcf-8757-8f75d3807da6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 NetworkManager[48915]: <info>  [1764059325.4535] manager: (tapd4493bab-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.458 253542 DEBUG nova.virt.libvirt.imagebackend [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.462 253542 INFO os_vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df')#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.504 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No VIF found with MAC fa:16:3e:6b:9f:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Using config drive#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.530 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.562 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updated VIF entry in instance network info cache for port 799c50c8-d1e7-4c15-a3d9-29903d576304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.563 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.575 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.619 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating config drive at /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.626 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklaxl3p8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.712 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(c22d1af405f648b08ec82c457c2957ba) on rbd image(c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.778 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklaxl3p8" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.802 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.805 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.834 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updated VIF entry in instance network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.835 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.852 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.942 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating config drive at /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config#033[00m
Nov 25 03:28:45 np0005534516 nova_compute[253538]: 2025-11-25 08:28:45.949 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvqme_g6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:45.958934907 +0000 UTC m=+0.020559549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:46 np0005534516 nova_compute[253538]: 2025-11-25 08:28:46.086 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvqme_g6" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:46.220734872 +0000 UTC m=+0.282359504 container create 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:28:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 25 03:28:46 np0005534516 nova_compute[253538]: 2025-11-25 08:28:46.291 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:28:46 np0005534516 nova_compute[253538]: 2025-11-25 08:28:46.295 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:28:46 np0005534516 systemd[1]: Started libpod-conmon-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope.
Nov 25 03:28:46 np0005534516 nova_compute[253538]: 2025-11-25 08:28:46.382 253542 DEBUG nova.compute.manager [req-9a5e8226-4d2e-4e64-be3c-59447e89c8dc req-aa4b130c-8c65-4f9c-9d7f-844c4c66f8de b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-deleted-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:46.554585541 +0000 UTC m=+0.616210263 container init 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:46.563721073 +0000 UTC m=+0.625345715 container start 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:28:46 np0005534516 affectionate_shtern[294351]: 167 167
Nov 25 03:28:46 np0005534516 systemd[1]: libpod-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope: Deactivated successfully.
Nov 25 03:28:46 np0005534516 conmon[294351]: conmon 81344a838ab472469a2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope/container/memory.events
Nov 25 03:28:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 03:28:46 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:46.785928344 +0000 UTC m=+0.847552996 container attach 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 03:28:46 np0005534516 podman[294298]: 2025-11-25 08:28:46.787548039 +0000 UTC m=+0.849172671 container died 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:28:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 180 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Nov 25 03:28:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b0ef35e2278b48f2a0c771a4fe239f72944bc5bb0dd86c0849fae98b51a10205-merged.mount: Deactivated successfully.
Nov 25 03:28:47 np0005534516 podman[294298]: 2025-11-25 08:28:47.509619707 +0000 UTC m=+1.571244349 container remove 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:28:47 np0005534516 systemd[1]: libpod-conmon-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope: Deactivated successfully.
Nov 25 03:28:47 np0005534516 podman[294382]: 2025-11-25 08:28:47.728663801 +0000 UTC m=+0.032099628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:28:48 np0005534516 podman[294382]: 2025-11-25 08:28:48.135381933 +0000 UTC m=+0.438817700 container create e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 03:28:48 np0005534516 systemd[1]: Started libpod-conmon-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope.
Nov 25 03:28:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.541 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.544 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deleting local config drive /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:48 np0005534516 kernel: tap799c50c8-d1: entered promiscuous mode
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.6366] manager: (tap799c50c8-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:48 np0005534516 podman[294382]: 2025-11-25 08:28:48.647633361 +0000 UTC m=+0.951069118 container init e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:28:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:48Z|00216|binding|INFO|Claiming lport 799c50c8-d1e7-4c15-a3d9-29903d576304 for this chassis.
Nov 25 03:28:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:48Z|00217|binding|INFO|799c50c8-d1e7-4c15-a3d9-29903d576304: Claiming fa:16:3e:fd:03:96 10.100.0.12
Nov 25 03:28:48 np0005534516 podman[294382]: 2025-11-25 08:28:48.660454015 +0000 UTC m=+0.963889762 container start e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.664 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.665 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.667 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58e86e0b-4b00-4bf4-8718-922ab7ac9e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.682 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:48 np0005534516 systemd-machined[215790]: New machine qemu-38-instance-00000021.
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.687 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7105c66d-7a24-4039-bc00-6c505b3aa540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a59e38-72c3-4aa5-9413-2454e62bdfe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 systemd[1]: Started Virtual Machine qemu-38-instance-00000021.
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.703 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[df1b49a7-b702-4ef5-a674-a4da1ca3789b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 systemd-udevd[294421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[689bdc15-67e4-408c-9b60-bdd38c91652c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.7325] device (tap799c50c8-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.7335] device (tap799c50c8-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:48 np0005534516 podman[294382]: 2025-11-25 08:28:48.758489475 +0000 UTC m=+1.061925242 container attach e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:28:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:48Z|00218|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 ovn-installed in OVS
Nov 25 03:28:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:48Z|00219|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 up in Southbound
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.764 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0af4bc-1e58-4278-a75b-442dd3d63e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.771 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b02be0b-b06e-40cf-99e7-54795611d9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 systemd-udevd[294423]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.7720] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d52beaaf-17c1-4f21-92aa-6ad5c8606844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.810 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc14722-25af-4139-9c75-1894d918fd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.831 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] cloning vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk@c22d1af405f648b08ec82c457c2957ba to images/86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.8412] device (tapba659d6c-c0): carrier: link connected
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[279e8e2e-1e65-408c-92ca-0fd7fd16b7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.863 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2650088a-7fe9-401a-a8a9-634ad8b539d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294470, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6888de8c-0a49-439b-86c0-2bf6e1d28c75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468344, 'tstamp': 468344}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294486, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.896 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25970630-f083-4aab-ada7-94e26e7137c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294490, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c340c-fe41-4d42-9ace-c545a4dd4506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[048fc774-22a1-48ee-9be4-b7444a7b561e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:48 np0005534516 NetworkManager[48915]: <info>  [1764059328.9930] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 25 03:28:48 np0005534516 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.995 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.995 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deleting local config drive /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config because it was imported into RBD.#033[00m
Nov 25 03:28:48 np0005534516 nova_compute[253538]: 2025-11-25 08:28:48.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:48Z|00220|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.026 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66466645-dd6a-460c-8d09-a04b91318b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.029 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.029 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:49 np0005534516 kernel: tapd4493bab-df: entered promiscuous mode
Nov 25 03:28:49 np0005534516 NetworkManager[48915]: <info>  [1764059329.0479] manager: (tapd4493bab-df): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 25 03:28:49 np0005534516 systemd-udevd[294442]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00221|binding|INFO|Claiming lport d4493bab-df0a-4934-ab26-43dae0dbae72 for this chassis.
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00222|binding|INFO|d4493bab-df0a-4934-ab26-43dae0dbae72: Claiming fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 NetworkManager[48915]: <info>  [1764059329.0616] device (tapd4493bab-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 NetworkManager[48915]: <info>  [1764059329.0621] device (tapd4493bab-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:28:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.068 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9f:ca 10.100.0.3'], port_security=['fa:16:3e:6b:9f:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf07e611-51eb-4bcf-8757-8f75d3807da6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ca9dda1-b2ea-4f89-8fc2-d2049ead3ade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d85b3127-2363-4567-943e-4e79235e055a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d4493bab-df0a-4934-ab26-43dae0dbae72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:28:49 np0005534516 systemd-machined[215790]: New machine qemu-39-instance-00000022.
Nov 25 03:28:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 180 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 385 op/s
Nov 25 03:28:49 np0005534516 systemd[1]: Started Virtual Machine qemu-39-instance-00000022.
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00223|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 ovn-installed in OVS
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00224|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 up in Southbound
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.158 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] flattening images/86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.383 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.2709084, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.389 253542 DEBUG nova.compute.manager [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.391 253542 DEBUG nova.compute.manager [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Processing event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.392 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.396 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.404 253542 INFO nova.virt.libvirt.driver [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance spawned successfully.#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.405 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00225|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:28:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:49Z|00226|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.412 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.436 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.2710547, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.444 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.445 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.445 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.446 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.447 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.447 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.454 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.396191, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.454 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.508 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.509 253542 INFO nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 8.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.510 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.513 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:49 np0005534516 podman[294611]: 2025-11-25 08:28:49.437137833 +0000 UTC m=+0.045510110 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.536 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.570 253542 INFO nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 10.08 seconds to build instance.#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.589 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:49 np0005534516 friendly_pike[294398]: {
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_id": 1,
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "type": "bluestore"
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    },
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_id": 2,
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "type": "bluestore"
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    },
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_id": 0,
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:        "type": "bluestore"
Nov 25 03:28:49 np0005534516 friendly_pike[294398]:    }
Nov 25 03:28:49 np0005534516 friendly_pike[294398]: }
Nov 25 03:28:49 np0005534516 systemd[1]: libpod-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope: Deactivated successfully.
Nov 25 03:28:49 np0005534516 podman[294611]: 2025-11-25 08:28:49.735142639 +0000 UTC m=+0.343514886 container create 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 03:28:49 np0005534516 podman[294382]: 2025-11-25 08:28:49.746642287 +0000 UTC m=+2.050078074 container died e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:49 np0005534516 systemd[1]: Started libpod-conmon-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope.
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.914 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.9140046, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.915 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Started (Lifecycle Event)#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.931 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.937 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.9147372, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.938 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:28:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb3ae8526a3fd45e0f449dbfc51ea63977bc5a178cd81680ec65b696722764a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.959 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.964 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:49 np0005534516 nova_compute[253538]: 2025-11-25 08:28:49.987 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 25 03:28:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9-merged.mount: Deactivated successfully.
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 25 03:28:50 np0005534516 podman[294382]: 2025-11-25 08:28:50.36399698 +0000 UTC m=+2.667432717 container remove e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:28:50 np0005534516 podman[294611]: 2025-11-25 08:28:50.392825318 +0000 UTC m=+1.001197585 container init 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:28:50 np0005534516 podman[294611]: 2025-11-25 08:28:50.399490241 +0000 UTC m=+1.007862478 container start 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:28:50 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : New worker (294722) forked
Nov 25 03:28:50 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : Loading success.
Nov 25 03:28:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 92748db0-d4de-4c59-8155-823f2b014854 does not exist
Nov 25 03:28:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ae4e0042-c102-402a-a02c-b1c15b9cc4ef does not exist
Nov 25 03:28:50 np0005534516 systemd[1]: libpod-conmon-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope: Deactivated successfully.
Nov 25 03:28:50 np0005534516 nova_compute[253538]: 2025-11-25 08:28:50.454 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:50 np0005534516 nova_compute[253538]: 2025-11-25 08:28:50.460 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(c22d1af405f648b08ec82c457c2957ba) on rbd image(c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.471 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d4493bab-df0a-4934-ab26-43dae0dbae72 in datapath 3eeb3245-b22f-4899-9ec0-084ea5f63b6b unbound from our chassis#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.473 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3eeb3245-b22f-4899-9ec0-084ea5f63b6b#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[326f85e0-8ef7-4063-b084-3b6026a8882c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.485 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3eeb3245-b1 in ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.488 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3eeb3245-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56153995-97a0-45e8-a5bf-c1356b4581d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb06096-7298-48d5-a275-d73e1f883117]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.505 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[debe0072-d6b5-47c8-8dd3-15ccfa30a926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.528 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3e36e8-d327-4a40-a28f-0f7bbdf45b50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.558 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[09dcd5c9-d1d1-40ef-9b95-a7ebec9e1792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.565 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4fb470-f6cb-4149-adc7-80d53f354a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 NetworkManager[48915]: <info>  [1764059330.5678] manager: (tap3eeb3245-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.598 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[96a6e261-e908-4a5e-83a6-aaae374998a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.602 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aba639be-1786-4136-8df4-b2c262ef9024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 NetworkManager[48915]: <info>  [1764059330.6214] device (tap3eeb3245-b0): carrier: link connected
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.625 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4df0b3-d8ec-4ec7-b5ae-9838a11b6154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e1fde7-01cb-4680-9712-b6c66fe086d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eeb3245-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:bc:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468522, 'reachable_time': 40763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294791, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f603cecb-efb9-4a5f-a05c-cec43b7c4ed8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:bc10'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468522, 'tstamp': 468522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294792, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be27721a-99c9-4dd1-af0e-240d45b03504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eeb3245-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:bc:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468522, 'reachable_time': 40763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294793, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.696 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37ecffa5-9251-43e4-907e-6218b5f3aa33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.746 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d101aeed-4b6c-4afc-bfe6-ecccae96654e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.747 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eeb3245-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eeb3245-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:50 np0005534516 NetworkManager[48915]: <info>  [1764059330.7584] manager: (tap3eeb3245-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 25 03:28:50 np0005534516 kernel: tap3eeb3245-b0: entered promiscuous mode
Nov 25 03:28:50 np0005534516 nova_compute[253538]: 2025-11-25 08:28:50.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.760 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3eeb3245-b0, col_values=(('external_ids', {'iface-id': 'ff896c66-8e2b-41d0-a738-217411538e37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:28:50 np0005534516 nova_compute[253538]: 2025-11-25 08:28:50.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:50Z|00227|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 03:28:50 np0005534516 nova_compute[253538]: 2025-11-25 08:28:50.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.780 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.784 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76f2804f-e432-44e1-bf8c-8c679305d7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.785 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-3eeb3245-b22f-4899-9ec0-084ea5f63b6b
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 3eeb3245-b22f-4899-9ec0-084ea5f63b6b
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:28:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.785 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'env', 'PROCESS_TAG=haproxy-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:28:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 181 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 139 op/s
Nov 25 03:28:51 np0005534516 podman[294826]: 2025-11-25 08:28:51.181812434 +0000 UTC m=+0.061959823 container create b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:28:51 np0005534516 systemd[1]: Started libpod-conmon-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope.
Nov 25 03:28:51 np0005534516 podman[294826]: 2025-11-25 08:28:51.146523499 +0000 UTC m=+0.026670908 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:28:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:28:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/609605f94689c92905145955c14d87bafa567f0f1b1543ae9778cd645791164d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:28:51 np0005534516 podman[294826]: 2025-11-25 08:28:51.290867459 +0000 UTC m=+0.171014868 container init b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 03:28:51 np0005534516 podman[294826]: 2025-11-25 08:28:51.297075921 +0000 UTC m=+0.177223300 container start b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:28:51 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : New worker (294847) forked
Nov 25 03:28:51 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : Loading success.
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.389 253542 DEBUG nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.420 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.420 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.422 253542 WARNING nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state active and task_state image_snapshot.#033[00m
Nov 25 03:28:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.424 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.424 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Processing event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 WARNING nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received unexpected event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.427 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.431 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059331.4312408, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.433 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.439 253542 INFO nova.virt.libvirt.driver [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance spawned successfully.#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.440 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.456 253542 INFO nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] instance snapshotting#033[00m
Nov 25 03:28:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.459 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.472 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.473 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.474 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.474 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.475 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.476 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.512 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.534 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(snap) on rbd image(86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.563 253542 INFO nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 10.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.564 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.619 253542 INFO nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 11.92 seconds to build instance.#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.633 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.765 253542 INFO nova.virt.libvirt.driver [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Beginning live snapshot process#033[00m
Nov 25 03:28:51 np0005534516 nova_compute[253538]: 2025-11-25 08:28:51.921 253542 DEBUG nova.virt.libvirt.imagebackend [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:28:52 np0005534516 nova_compute[253538]: 2025-11-25 08:28:52.171 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(c40393c46a9146f59b3d7035ca3e9699) on rbd image(8b29fb31-718d-4926-bf4f-bae461ea70ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:28:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 25 03:28:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 25 03:28:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 196 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 725 KiB/s wr, 148 op/s
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00228|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00229|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00230|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:53 np0005534516 NetworkManager[48915]: <info>  [1764059333.1134] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 25 03:28:53 np0005534516 NetworkManager[48915]: <info>  [1764059333.1141] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00231|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00232|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:28:53Z|00233|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:28:53
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images']
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.670 253542 DEBUG nova.compute.manager [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG nova.compute.manager [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing instance network info cache due to event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:28:53 np0005534516 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:28:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:28:54 np0005534516 nova_compute[253538]: 2025-11-25 08:28:54.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 227 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Nov 25 03:28:55 np0005534516 nova_compute[253538]: 2025-11-25 08:28:55.124 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updated VIF entry in instance network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:28:55 np0005534516 nova_compute[253538]: 2025-11-25 08:28:55.125 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:28:55 np0005534516 nova_compute[253538]: 2025-11-25 08:28:55.151 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:28:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:28:55 np0005534516 nova_compute[253538]: 2025-11-25 08:28:55.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:28:56 np0005534516 nova_compute[253538]: 2025-11-25 08:28:56.161 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk@c40393c46a9146f59b3d7035ca3e9699 to images/c4072411-f87d-45fb-92e8-02dc5884a35e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:28:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 3.2 MiB/s wr, 345 op/s
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver 
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver 
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver #033[00m
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.874 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059322.8733459, 4c934302-d7cd-4826-835e-cab6dba97e3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.874 253542 INFO nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:28:57 np0005534516 nova_compute[253538]: 2025-11-25 08:28:57.898 253542 DEBUG nova.compute.manager [None req-75a64359-8c63-4d8d-8411-66c536a9a120 - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:28:58 np0005534516 nova_compute[253538]: 2025-11-25 08:28:58.314 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(snap) on rbd image(86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:58 np0005534516 nova_compute[253538]: 2025-11-25 08:28:58.612 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/c4072411-f87d-45fb-92e8-02dc5884a35e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:28:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 230 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.5 MiB/s rd, 3.0 MiB/s wr, 315 op/s
Nov 25 03:28:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 25 03:28:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 25 03:28:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 25 03:28:59 np0005534516 nova_compute[253538]: 2025-11-25 08:28:59.827 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(c40393c46a9146f59b3d7035ca3e9699) on rbd image(8b29fb31-718d-4926-bf4f-bae461ea70ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:28:59 np0005534516 nova_compute[253538]: 2025-11-25 08:28:59.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:00 np0005534516 nova_compute[253538]: 2025-11-25 08:29:00.054 253542 WARNING nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Image not found during snapshot: nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.#033[00m
Nov 25 03:29:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 25 03:29:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:00Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 03:29:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:00Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 03:29:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 25 03:29:00 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 25 03:29:00 np0005534516 nova_compute[253538]: 2025-11-25 08:29:00.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:00 np0005534516 nova_compute[253538]: 2025-11-25 08:29:00.667 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(c4072411-f87d-45fb-92e8-02dc5884a35e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 245 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.1 MiB/s wr, 285 op/s
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.849 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.849 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.851 253542 INFO nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Terminating instance#033[00m
Nov 25 03:29:01 np0005534516 nova_compute[253538]: 2025-11-25 08:29:01.852 253542 DEBUG nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:29:01 np0005534516 podman[295058]: 2025-11-25 08:29:01.860953991 +0000 UTC m=+0.100759396 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:29:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 25 03:29:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 25 03:29:02 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 25 03:29:02 np0005534516 kernel: tap6bcb5ada-83 (unregistering): left promiscuous mode
Nov 25 03:29:02 np0005534516 NetworkManager[48915]: <info>  [1764059342.7728] device (tap6bcb5ada-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:29:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:02Z|00234|binding|INFO|Releasing lport 6bcb5ada-83f7-419f-9909-98ba6f37630c from this chassis (sb_readonly=0)
Nov 25 03:29:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:02Z|00235|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c down in Southbound
Nov 25 03:29:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:02Z|00236|binding|INFO|Removing iface tap6bcb5ada-83 ovn-installed in OVS
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.803 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:c6:fc 10.100.0.8'], port_security=['fa:16:3e:34:c6:fc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6bcb5ada-83f7-419f-9909-98ba6f37630c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.804 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcb5ada-83f7-419f-9909-98ba6f37630c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis#033[00m
Nov 25 03:29:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.805 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.806 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80242c66-d61d-4b5b-a9c5-6720eca7812b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.807 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:02 np0005534516 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 25 03:29:02 np0005534516 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 13.006s CPU time.
Nov 25 03:29:02 np0005534516 systemd-machined[215790]: Machine qemu-37-instance-00000020 terminated.
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.888 253542 INFO nova.virt.libvirt.driver [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance destroyed successfully.#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.889 253542 DEBUG nova.objects.instance [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.905 253542 DEBUG nova.virt.libvirt.vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:00Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.906 253542 DEBUG nova.network.os_vif_util [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.907 253542 DEBUG nova.network.os_vif_util [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.907 253542 DEBUG os_vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.910 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bcb5ada-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:02 np0005534516 nova_compute[253538]: 2025-11-25 08:29:02.919 253542 INFO os_vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83')#033[00m
Nov 25 03:29:02 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : haproxy version is 2.8.14-c23fe91
Nov 25 03:29:02 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : path to executable is /usr/sbin/haproxy
Nov 25 03:29:02 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [ALERT]    (293466) : Current worker (293468) exited with code 143 (Terminated)
Nov 25 03:29:02 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [WARNING]  (293466) : All workers exited. Exiting... (0)
Nov 25 03:29:02 np0005534516 systemd[1]: libpod-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope: Deactivated successfully.
Nov 25 03:29:02 np0005534516 podman[295108]: 2025-11-25 08:29:02.987725064 +0000 UTC m=+0.073472632 container died 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:29:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e-userdata-shm.mount: Deactivated successfully.
Nov 25 03:29:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7db1af69cade0c066d85c02aae054723464bc6f1508df360470a31f2e5094b6d-merged.mount: Deactivated successfully.
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 267 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 164 op/s
Nov 25 03:29:03 np0005534516 podman[295108]: 2025-11-25 08:29:03.109510769 +0000 UTC m=+0.195258327 container cleanup 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:29:03 np0005534516 systemd[1]: libpod-conmon-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope: Deactivated successfully.
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.123 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.125 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.125 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:29:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 25 03:29:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 25 03:29:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 25 03:29:03 np0005534516 podman[295159]: 2025-11-25 08:29:03.290560574 +0000 UTC m=+0.155319145 container remove 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.300 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5757ca2-4c96-4b87-9fad-18776b253eb6]: (4, ('Tue Nov 25 08:29:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e)\n528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e\nTue Nov 25 08:29:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e)\n528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.302 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4124f1a-818c-4a03-9744-3e8955a02fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:03 np0005534516 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.314 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67b4f865-49af-4959-8ec8-c0f8ceef058b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.327 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c042106-bc59-4317-95b8-53e10741a8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.329 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f1330b-0be4-43e8-9127-2cc29101fa15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 nova_compute[253538]: 2025-11-25 08:29:03.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a78b62d6-9acc-4650-ba0f-42bb22a5cfa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467576, 'reachable_time': 27034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295173, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.362 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:29:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.362 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[05ea67ee-cd50-4433-b969-194536adb5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014835712134531483 of space, bias 1.0, pg target 0.4450713640359445 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001186539017223416 of space, bias 1.0, pg target 0.3559617051670248 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:29:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:29:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:04Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 03:29:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:04Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 03:29:04 np0005534516 nova_compute[253538]: 2025-11-25 08:29:04.521 253542 INFO nova.virt.libvirt.driver [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Snapshot image upload complete#033[00m
Nov 25 03:29:04 np0005534516 nova_compute[253538]: 2025-11-25 08:29:04.522 253542 INFO nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 13.06 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:29:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:04Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:03:96 10.100.0.12
Nov 25 03:29:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:04Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:03:96 10.100.0.12
Nov 25 03:29:04 np0005534516 nova_compute[253538]: 2025-11-25 08:29:04.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 262 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 MiB/s wr, 377 op/s
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 DEBUG nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:05 np0005534516 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 WARNING nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received unexpected event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:29:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:05 np0005534516 podman[295175]: 2025-11-25 08:29:05.811922153 +0000 UTC m=+0.059929868 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.149 253542 INFO nova.virt.libvirt.driver [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deleting instance files /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_del#033[00m
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.150 253542 INFO nova.virt.libvirt.driver [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deletion of /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_del complete#033[00m
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.216 253542 INFO nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 4.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.216 253542 DEBUG oslo.service.loopingcall [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.217 253542 DEBUG nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:29:06 np0005534516 nova_compute[253538]: 2025-11-25 08:29:06.217 253542 DEBUG nova.network.neutron [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:29:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 25 03:29:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 25 03:29:06 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 25 03:29:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 245 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 13 MiB/s wr, 471 op/s
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.266 253542 DEBUG nova.network.neutron [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.281 253542 INFO nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.318 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.318 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.396 253542 DEBUG oslo_concurrency.processutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.427 253542 DEBUG nova.compute.manager [req-57121eb1-ca13-44e7-bae4-13009971df6c req-433d6ade-0cbb-4226-8954-106d16bf7b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-deleted-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:07 np0005534516 nova_compute[253538]: 2025-11-25 08:29:07.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514227300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.020 253542 DEBUG oslo_concurrency.processutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.027 253542 DEBUG nova.compute.provider_tree [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.040 253542 DEBUG nova.scheduler.client.report [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.064 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.089 253542 INFO nova.scheduler.client.report [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.147 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.293 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.293 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.309 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.362 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.362 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.368 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.368 253542 INFO nova.compute.claims [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.480 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.765 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.766 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.779 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.846 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588988723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.956 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.961 253542 DEBUG nova.compute.provider_tree [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.975 253542 DEBUG nova.scheduler.client.report [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.994 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.995 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:29:08 np0005534516 nova_compute[253538]: 2025-11-25 08:29:08.998 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.004 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.004 253542 INFO nova.compute.claims [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.067 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.068 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.086 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.100 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:29:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 237 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.1 MiB/s wr, 372 op/s
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.189 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.225 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.227 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.228 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating image(s)#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.254 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.280 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.304 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.309 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.310 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.506 253542 DEBUG nova.virt.libvirt.imagebackend [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.557 253542 DEBUG nova.virt.libvirt.imagebackend [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.557 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning images/c4072411-f87d-45fb-92e8-02dc5884a35e@snap to None/0067149a-8f99-4257-af2a-fd9adcc41719_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263290914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.948 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.759s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.953 253542 DEBUG nova.compute.provider_tree [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.969 253542 DEBUG nova.scheduler.client.report [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.993 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:09 np0005534516 nova_compute[253538]: 2025-11-25 08:29:09.994 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.048 253542 DEBUG nova.policy [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.056 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.057 253542 DEBUG nova.network.neutron [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.078 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.101 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.131 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.215 253542 DEBUG nova.objects.instance [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.228 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.228 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Ensure instance console log exists: /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.257 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.259 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.259 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating image(s)#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.278 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.299 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 25 03:29:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 25 03:29:10 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.353 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.356 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.390 253542 DEBUG nova.network.neutron [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.391 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.421 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.421 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.422 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.422 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.439 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.442 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.586 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.587 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.703 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Successfully created port: f3dedfca-04a0-44af-bca1-33a95c9804fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.757 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:10 np0005534516 podman[295533]: 2025-11-25 08:29:10.864046511 +0000 UTC m=+0.098320229 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.970 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.971 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:10 np0005534516 nova_compute[253538]: 2025-11-25 08:29:10.986 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.063 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.064 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.071 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.071 253542 INFO nova.compute.claims [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.081 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 246 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.3 MiB/s wr, 356 op/s
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.157 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] resizing rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.353 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.547 253542 DEBUG nova.objects.instance [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.562 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.562 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Ensure instance console log exists: /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.565 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.566 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Successfully updated port: f3dedfca-04a0-44af-bca1-33a95c9804fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.571 253542 WARNING nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.576 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.576 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.577 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.578 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.579 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.583 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.587 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.587 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.589 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.695478) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351695509, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2310, "num_deletes": 267, "total_data_size": 3387415, "memory_usage": 3434448, "flush_reason": "Manual Compaction"}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.699 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.701 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.702 253542 INFO nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Terminating instance#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.703 253542 DEBUG nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351721591, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3322828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25678, "largest_seqno": 27987, "table_properties": {"data_size": 3312245, "index_size": 6821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22622, "raw_average_key_size": 21, "raw_value_size": 3290856, "raw_average_value_size": 3081, "num_data_blocks": 297, "num_entries": 1068, "num_filter_entries": 1068, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059172, "oldest_key_time": 1764059172, "file_creation_time": 1764059351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 26155 microseconds, and 6398 cpu microseconds.
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.721631) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3322828 bytes OK
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.721650) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733144) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733168) EVENT_LOG_v1 {"time_micros": 1764059351733161, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3377458, prev total WAL file size 3377458, number of live WAL files 2.
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733968) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3244KB)], [59(7132KB)]
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351734000, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10626072, "oldest_snapshot_seqno": -1}
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.757 253542 DEBUG nova.compute.manager [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-changed-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.758 253542 DEBUG nova.compute.manager [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Refreshing instance network info cache due to event network-changed-f3dedfca-04a0-44af-bca1-33a95c9804fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.758 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.767 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306774249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.821 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.826 253542 DEBUG nova.compute.provider_tree [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.838 253542 DEBUG nova.scheduler.client.report [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5304 keys, 8901387 bytes, temperature: kUnknown
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351848001, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8901387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8864454, "index_size": 22551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132222, "raw_average_key_size": 24, "raw_value_size": 8767454, "raw_average_value_size": 1652, "num_data_blocks": 930, "num_entries": 5304, "num_filter_entries": 5304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.848284) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8901387 bytes
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.849802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.1 rd, 78.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5840, records dropped: 536 output_compression: NoCompression
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.850083) EVENT_LOG_v1 {"time_micros": 1764059351850072, "job": 32, "event": "compaction_finished", "compaction_time_micros": 114079, "compaction_time_cpu_micros": 18263, "output_level": 6, "num_output_files": 1, "total_output_size": 8901387, "num_input_records": 5840, "num_output_records": 5304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351850859, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351852081, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.857 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.858 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:29:11 np0005534516 kernel: tapd4493bab-df (unregistering): left promiscuous mode
Nov 25 03:29:11 np0005534516 NetworkManager[48915]: <info>  [1764059351.9028] device (tapd4493bab-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:29:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:11Z|00237|binding|INFO|Releasing lport d4493bab-df0a-4934-ab26-43dae0dbae72 from this chassis (sb_readonly=0)
Nov 25 03:29:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:11Z|00238|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 down in Southbound
Nov 25 03:29:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:11Z|00239|binding|INFO|Removing iface tapd4493bab-df ovn-installed in OVS
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.921 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.922 253542 DEBUG nova.network.neutron [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:29:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.922 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9f:ca 10.100.0.3'], port_security=['fa:16:3e:6b:9f:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf07e611-51eb-4bcf-8757-8f75d3807da6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ca9dda1-b2ea-4f89-8fc2-d2049ead3ade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d85b3127-2363-4567-943e-4e79235e055a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d4493bab-df0a-4934-ab26-43dae0dbae72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.926 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d4493bab-df0a-4934-ab26-43dae0dbae72 in datapath 3eeb3245-b22f-4899-9ec0-084ea5f63b6b unbound from our chassis#033[00m
Nov 25 03:29:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.928 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3eeb3245-b22f-4899-9ec0-084ea5f63b6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.930 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5add2573-b93d-47db-9e0e-9f2873978d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.931 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b namespace which is not needed anymore#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.941 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:11 np0005534516 nova_compute[253538]: 2025-11-25 08:29:11.961 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:29:11 np0005534516 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 25 03:29:11 np0005534516 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Consumed 13.015s CPU time.
Nov 25 03:29:11 np0005534516 systemd-machined[215790]: Machine qemu-39-instance-00000022 terminated.
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.038 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181897552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.045 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.047 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.047 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating image(s)#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.073 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:12 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : haproxy version is 2.8.14-c23fe91
Nov 25 03:29:12 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : path to executable is /usr/sbin/haproxy
Nov 25 03:29:12 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [ALERT]    (294845) : Current worker (294847) exited with code 143 (Terminated)
Nov 25 03:29:12 np0005534516 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [WARNING]  (294845) : All workers exited. Exiting... (0)
Nov 25 03:29:12 np0005534516 systemd[1]: libpod-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope: Deactivated successfully.
Nov 25 03:29:12 np0005534516 podman[295698]: 2025-11-25 08:29:12.105205516 +0000 UTC m=+0.048966094 container died b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.106 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.142 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.148 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.187 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.196 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.197 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.223 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.228 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.262 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.265 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.265 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.266 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.267 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-609605f94689c92905145955c14d87bafa567f0f1b1543ae9778cd645791164d-merged.mount: Deactivated successfully.
Nov 25 03:29:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162-userdata-shm.mount: Deactivated successfully.
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.294 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.298 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.329 253542 INFO nova.virt.libvirt.driver [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance destroyed successfully.#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.330 253542 DEBUG nova.objects.instance [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'resources' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.345 253542 DEBUG nova.virt.libvirt.vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.346 253542 DEBUG nova.network.os_vif_util [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.347 253542 DEBUG nova.network.os_vif_util [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.347 253542 DEBUG os_vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.351 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4493bab-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.357 253542 INFO os_vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df')#033[00m
Nov 25 03:29:12 np0005534516 podman[295698]: 2025-11-25 08:29:12.457856963 +0000 UTC m=+0.401617531 container cleanup b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:29:12 np0005534516 systemd[1]: libpod-conmon-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope: Deactivated successfully.
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:12 np0005534516 podman[295885]: 2025-11-25 08:29:12.657607455 +0000 UTC m=+0.170247877 container remove b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.665 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdfc806-c9da-4a81-b814-cadb70f385ee]: (4, ('Tue Nov 25 08:29:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b (b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162)\nb4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162\nTue Nov 25 08:29:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b (b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162)\nb4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.669 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4440d0-398f-490f-9bfe-a5dd4e34b6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.670 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eeb3245-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:12 np0005534516 kernel: tap3eeb3245-b0: left promiscuous mode
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52970f9a-4d28-4745-82d6-959be0c9e27a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[068f7354-0d41-4918-8930-11a91e4b3649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4753a65b-b668-4878-aff9-d51e1bae75d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0100d9ca-3e1a-4901-8e6c-76af52433700]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468515, 'reachable_time': 38849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295905, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 systemd[1]: run-netns-ovnmeta\x2d3eeb3245\x2db22f\x2d4899\x2d9ec0\x2d084ea5f63b6b.mount: Deactivated successfully.
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.734 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:29:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.734 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba6e13c-0941-4514-b05f-4b6d033edf18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275503310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.791 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.793 253542 DEBUG nova.objects.instance [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.794 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.824 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <uuid>8400a9a9-bd7a-434b-a11b-6db7e12a4e18</uuid>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <name>instance-00000024</name>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:name>tempest-ListImageFiltersTestJSON-server-502320856</nova:name>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:29:11</nova:creationTime>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:user uuid="6bc7b68c86ab44d29c118388df2a8bc0">tempest-ListImageFiltersTestJSON-638123770-project-member</nova:user>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <nova:project uuid="70a02215d9344af388c6439ace9208a4">tempest-ListImageFiltersTestJSON-638123770</nova:project>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="serial">8400a9a9-bd7a-434b-a11b-6db7e12a4e18</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="uuid">8400a9a9-bd7a-434b-a11b-6db7e12a4e18</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/console.log" append="off"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:29:12 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:29:12 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:29:12 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:29:12 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.867 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] resizing rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.915 253542 DEBUG nova.network.neutron [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.915 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.922 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.923 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.923 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Using config drive#033[00m
Nov 25 03:29:12 np0005534516 nova_compute[253538]: 2025-11-25 08:29:12.944 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.058 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.060 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.064 253542 DEBUG nova.objects.instance [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.074 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Ensure instance console log exists: /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.077 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.082 253542 WARNING nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.086 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.086 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.089 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.097 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 250 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 563 KiB/s rd, 1.8 MiB/s wr, 217 op/s
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.136 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating config drive at /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.141 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptld7tzua execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.186 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance network_info: |[{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.205 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Refreshing network info cache for port f3dedfca-04a0-44af-bca1-33a95c9804fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.208 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start _get_guest_xml network_info=[{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:28:51Z,direct_url=<?>,disk_format='raw',id=c4072411-f87d-45fb-92e8-02dc5884a35e,min_disk=1,min_ram=0,name='tempest-test-snap-1547610271',owner='b0a28d62fb1841c087b84b40bf5a54ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:29:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': 'c4072411-f87d-45fb-92e8-02dc5884a35e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.212 253542 WARNING nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.216 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.216 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.222 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:28:51Z,direct_url=<?>,disk_format='raw',id=c4072411-f87d-45fb-92e8-02dc5884a35e,min_disk=1,min_ram=0,name='tempest-test-snap-1547610271',owner='b0a28d62fb1841c087b84b40bf5a54ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:29:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.228 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.294 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptld7tzua" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.321 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.334 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.373 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.446 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.447 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.455 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.456 253542 INFO nova.compute.claims [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.541 253542 INFO nova.virt.libvirt.driver [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deleting instance files /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6_del#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.542 253542 INFO nova.virt.libvirt.driver [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deletion of /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6_del complete#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.553 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.553 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Deleting local config drive /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config because it was imported into RBD.#033[00m
Nov 25 03:29:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042899764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.583 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.612 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.619 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 systemd-machined[215790]: New machine qemu-40-instance-00000024.
Nov 25 03:29:13 np0005534516 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.669 253542 INFO nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 1.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG oslo.service.loopingcall [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG nova.network.neutron [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:29:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679845287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.727 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.755 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.791 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.797 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059353.9762833, 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.981 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.981 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.985 253542 INFO nova.virt.libvirt.driver [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance spawned successfully.#033[00m
Nov 25 03:29:13 np0005534516 nova_compute[253538]: 2025-11-25 08:29:13.985 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.010 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.016 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.017 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.017 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.041 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.042 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059353.9768515, 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.042 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] VM Started (Lifecycle Event)#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.079 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.100 253542 INFO nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.100 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.102 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186389793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.135 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.136 253542 DEBUG nova.objects.instance [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.152 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <uuid>225f80e2-9e66-46fb-b77d-9a54fa8a2a41</uuid>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <name>instance-00000025</name>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:name>tempest-ListImageFiltersTestJSON-server-185269720</nova:name>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:29:13</nova:creationTime>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:user uuid="6bc7b68c86ab44d29c118388df2a8bc0">tempest-ListImageFiltersTestJSON-638123770-project-member</nova:user>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:project uuid="70a02215d9344af388c6439ace9208a4">tempest-ListImageFiltersTestJSON-638123770</nova:project>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="serial">225f80e2-9e66-46fb-b77d-9a54fa8a2a41</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="uuid">225f80e2-9e66-46fb-b77d-9a54fa8a2a41</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/console.log" append="off"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:29:14 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:29:14 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.164 253542 INFO nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 5.34 seconds to build instance.#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.184 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446547332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.204 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.209 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.209 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.210 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Using config drive#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.232 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484542476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.250 253542 DEBUG nova.compute.provider_tree [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.266 253542 DEBUG nova.scheduler.client.report [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.270 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.271 253542 DEBUG nova.virt.libvirt.vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:09Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.271 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.272 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.272 253542 DEBUG nova.objects.instance [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.287 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <uuid>0067149a-8f99-4257-af2a-fd9adcc41719</uuid>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <name>instance-00000023</name>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-533297759</nova:name>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:29:13</nova:creationTime>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="c4072411-f87d-45fb-92e8-02dc5884a35e"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <nova:port uuid="f3dedfca-04a0-44af-bca1-33a95c9804fa">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="serial">0067149a-8f99-4257-af2a-fd9adcc41719</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="uuid">0067149a-8f99-4257-af2a-fd9adcc41719</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0067149a-8f99-4257-af2a-fd9adcc41719_disk">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0067149a-8f99-4257-af2a-fd9adcc41719_disk.config">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:7d:ec:bd"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <target dev="tapf3dedfca-04"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/console.log" append="off"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:29:14 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:29:14 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:29:14 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:29:14 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Preparing to wait for external event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG nova.virt.libvirt.vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:09Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.290 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.290 253542 DEBUG os_vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.294 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.294 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.297 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3dedfca-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.298 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3dedfca-04, col_values=(('external_ids', {'iface-id': 'f3dedfca-04a0-44af-bca1-33a95c9804fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:ec:bd', 'vm-uuid': '0067149a-8f99-4257-af2a-fd9adcc41719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:14 np0005534516 NetworkManager[48915]: <info>  [1764059354.3009] manager: (tapf3dedfca-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.306 253542 INFO os_vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04')#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.348 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.349 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:7d:ec:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.361 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Using config drive#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.377 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.383 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.403 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.425 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating config drive at /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.430 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4v9w67l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.498 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.500 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.500 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating image(s)#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.523 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.542 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.563 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.567 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.608 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updated VIF entry in instance network info cache for port f3dedfca-04a0-44af-bca1-33a95c9804fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.609 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.611 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4v9w67l" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.639 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.642 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.670 253542 DEBUG nova.policy [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.680 253542 DEBUG nova.network.neutron [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.685 253542 DEBUG nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-deleted-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.686 253542 INFO nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Neutron deleted interface d4493bab-df0a-4934-ab26-43dae0dbae72; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.686 253542 DEBUG nova.network.neutron [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.689 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.690 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.693 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.694 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.694 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.726 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.730 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.765 253542 INFO nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.777 253542 DEBUG nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Detach interface failed, port_id=d4493bab-df0a-4934-ab26-43dae0dbae72, reason: Instance cf07e611-51eb-4bcf-8757-8f75d3807da6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.828 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating config drive at /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.834 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfosizp9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.864 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.865 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:14 np0005534516 nova_compute[253538]: 2025-11-25 08:29:14.969 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfosizp9" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.009 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.011 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.052 253542 DEBUG oslo_concurrency.processutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 280 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 767 KiB/s rd, 6.4 MiB/s wr, 285 op/s
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.130 253542 DEBUG nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.130 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 WARNING nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received unexpected event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.212 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Successfully created port: 19217fbd-a123-469a-b432-be5d2543613c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.258 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.260 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Deleting local config drive /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config because it was imported into RBD.#033[00m
Nov 25 03:29:15 np0005534516 systemd-machined[215790]: New machine qemu-41-instance-00000025.
Nov 25 03:29:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:15 np0005534516 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.475 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433953808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.531 253542 DEBUG oslo_concurrency.processutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.536 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.627 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.628 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.633 253542 DEBUG nova.compute.provider_tree [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.650 253542 DEBUG nova.scheduler.client.report [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.659 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.671 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.673 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.727 253542 INFO nova.scheduler.client.report [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Deleted allocations for instance cf07e611-51eb-4bcf-8757-8f75d3807da6#033[00m
Nov 25 03:29:15 np0005534516 nova_compute[253538]: 2025-11-25 08:29:15.789 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43977870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.260 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.261 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deleting local config drive /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config because it was imported into RBD.#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.274 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:16 np0005534516 kernel: tapf3dedfca-04: entered promiscuous mode
Nov 25 03:29:16 np0005534516 NetworkManager[48915]: <info>  [1764059356.3041] manager: (tapf3dedfca-04): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 25 03:29:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:16Z|00240|binding|INFO|Claiming lport f3dedfca-04a0-44af-bca1-33a95c9804fa for this chassis.
Nov 25 03:29:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:16Z|00241|binding|INFO|f3dedfca-04a0-44af-bca1-33a95c9804fa: Claiming fa:16:3e:7d:ec:bd 10.100.0.14
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.323 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ec:bd 10.100.0.14'], port_security=['fa:16:3e:7d:ec:bd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0067149a-8f99-4257-af2a-fd9adcc41719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3dedfca-04a0-44af-bca1-33a95c9804fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.325 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3dedfca-04a0-44af-bca1-33a95c9804fa in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.326 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:29:16 np0005534516 systemd-udevd[296585]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:29:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:16Z|00242|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa ovn-installed in OVS
Nov 25 03:29:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:16Z|00243|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa up in Southbound
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:16 np0005534516 systemd-machined[215790]: New machine qemu-42-instance-00000023.
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[098eeb37-ed75-42b5-977f-d36cb6608d0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 NetworkManager[48915]: <info>  [1764059356.3561] device (tapf3dedfca-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:29:16 np0005534516 NetworkManager[48915]: <info>  [1764059356.3575] device (tapf3dedfca-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:29:16 np0005534516 systemd[1]: Started Virtual Machine qemu-42-instance-00000023.
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.378 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cc40c-d0b6-435d-b7ab-fac09359d717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.381 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7f9b5c-d2f0-4700-a3f3-c8f0aea3279f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1f63ba-a4fd-4797-b8ab-530e271ffa0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78a9d06a-a8f6-494a-adb9-6bec7040fa83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296612, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.443 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86f585ce-0573-4988-b53b-2f709bca94df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468354, 'tstamp': 468354}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296614, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468357, 'tstamp': 468357}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296614, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.445 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.448 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.448 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.449 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.466 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.466 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.470 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.470 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.473 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.474 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.477 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.478 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.714 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.715 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3878MB free_disk=59.88420486450195GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8b29fb31-718d-4926-bf4f-bae461ea70ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0067149a-8f99-4257-af2a-fd9adcc41719 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.788 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:29:16 np0005534516 nova_compute[253538]: 2025-11-25 08:29:16.877 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.038 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.015526, 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.042 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.043 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.053 253542 DEBUG nova.objects.instance [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.057 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.063 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.063 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Ensure instance console log exists: /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.065 253542 INFO nova.virt.libvirt.driver [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance spawned successfully.#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.065 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.078 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.079 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.080 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.081 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.082 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.082 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0156102, 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] VM Started (Lifecycle Event)#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.107 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 270 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.121 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.122 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0695934, 0067149a-8f99-4257-af2a-fd9adcc41719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.122 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Started (Lifecycle Event)#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.133 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.137 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0703814, 0067149a-8f99-4257-af2a-fd9adcc41719 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.138 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.151 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.153 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.166 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16103673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.378 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.383 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.405 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.440 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.441 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.457 253542 INFO nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 5.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.458 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.514 253542 INFO nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 6.47 seconds to build instance.#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.529 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Successfully updated port: 19217fbd-a123-469a-b432-be5d2543613c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.530 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.538 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.538 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.539 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.717 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.884 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059342.8839343, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.885 253542 INFO nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:29:17 np0005534516 nova_compute[253538]: 2025-11-25 08:29:17.900 253542 DEBUG nova.compute.manager [None req-17f0f62d-08db-488d-943b-90562cd16652 - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:18 np0005534516 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG nova.compute.manager [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-changed-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:18 np0005534516 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG nova.compute.manager [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Refreshing instance network info cache due to event network-changed-19217fbd-a123-469a-b432-be5d2543613c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:29:18 np0005534516 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.073 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.092 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.092 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance network_info: |[{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.093 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.093 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Refreshing network info cache for port 19217fbd-a123-469a-b432-be5d2543613c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.095 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start _get_guest_xml network_info=[{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:29:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 281 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.9 MiB/s wr, 256 op/s
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.132 253542 WARNING nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.142 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.142 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.146 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.147 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.147 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.148 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.148 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.150 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.150 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.154 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.367 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.368 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794632115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.668 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.692 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.696 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.785 253542 DEBUG nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.822 253542 INFO nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] instance snapshotting#033[00m
Nov 25 03:29:19 np0005534516 nova_compute[253538]: 2025-11-25 08:29:19.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.039 253542 INFO nova.virt.libvirt.driver [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Beginning live snapshot process#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.108 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.109 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Processing event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.112 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.112 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.113 253542 WARNING nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received unexpected event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.114 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.117 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059360.116616, 0067149a-8f99-4257-af2a-fd9adcc41719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:29:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180142779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.165 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.168 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.173 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.178 253542 DEBUG nova.virt.libvirt.imagebackend [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.182 253542 INFO nova.virt.libvirt.driver [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance spawned successfully.#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.183 253542 INFO nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 10.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.183 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.191 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.193 253542 DEBUG nova.virt.libvirt.vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:14Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.193 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.195 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.196 253542 DEBUG nova.objects.instance [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.197 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.210 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <uuid>8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</uuid>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <name>instance-00000026</name>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1871314991</nova:name>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:29:19</nova:creationTime>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <nova:port uuid="19217fbd-a123-469a-b432-be5d2543613c">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="serial">8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="uuid">8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:85:9f:bb"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <target dev="tap19217fbd-a1"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/console.log" append="off"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:29:20 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:29:20 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:29:20 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:29:20 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.217 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Preparing to wait for external event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.217 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.218 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.218 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.219 253542 DEBUG nova.virt.libvirt.vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:14Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.219 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.220 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.221 253542 DEBUG os_vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.221 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.222 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.222 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19217fbd-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19217fbd-a1, col_values=(('external_ids', {'iface-id': '19217fbd-a123-469a-b432-be5d2543613c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:9f:bb', 'vm-uuid': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:20 np0005534516 NetworkManager[48915]: <info>  [1764059360.2284] manager: (tap19217fbd-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.241 253542 INFO os_vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1')#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.243 253542 INFO nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 11.90 seconds to build instance.#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.260 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:85:9f:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.301 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Using config drive#033[00m
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.318 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:20 np0005534516 nova_compute[253538]: 2025-11-25 08:29:20.379 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(ac64e0e5302e4b08871927b6e4b87158) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.061 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updated VIF entry in instance network info cache for port 19217fbd-a123-469a-b432-be5d2543613c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.062 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.076 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 327 op/s
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.182 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating config drive at /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.187 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfyp67o4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.274 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.275 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.278 253542 INFO nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Terminating instance#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.279 253542 DEBUG nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:29:21 np0005534516 kernel: tapf3dedfca-04 (unregistering): left promiscuous mode
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.322 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfyp67o4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:21 np0005534516 NetworkManager[48915]: <info>  [1764059361.3326] device (tapf3dedfca-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00244|binding|INFO|Releasing lport f3dedfca-04a0-44af-bca1-33a95c9804fa from this chassis (sb_readonly=0)
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00245|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa down in Southbound
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00246|binding|INFO|Removing iface tapf3dedfca-04 ovn-installed in OVS
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ec:bd 10.100.0.14'], port_security=['fa:16:3e:7d:ec:bd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0067149a-8f99-4257-af2a-fd9adcc41719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3dedfca-04a0-44af-bca1-33a95c9804fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.349 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3dedfca-04a0-44af-bca1-33a95c9804fa in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.350 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca06695-8ce8-4a5e-86c8-b83cf4fca5f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.376 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.381 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:21 np0005534516 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 03:29:21 np0005534516 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Consumed 1.621s CPU time.
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.401 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4749c2-c798-4ae6-858a-f29dfcb1f9e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 systemd-machined[215790]: Machine qemu-42-instance-00000023 terminated.
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f99403-72eb-4eec-b020-f3c255c442de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.435 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9c72a808-499e-4457-9af2-df34fc48a6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e3280b-519e-4c35-ae10-37675bf88104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296889, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83b7c4b7-9ed6-4ffe-b2c2-185099f04a10]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468354, 'tstamp': 468354}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296905, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468357, 'tstamp': 468357}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296905, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.481 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.494 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.518 253542 INFO nova.virt.libvirt.driver [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance destroyed successfully.#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.519 253542 DEBUG nova.objects.instance [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 25 03:29:21 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.536 253542 DEBUG nova.virt.libvirt.vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:20Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.537 253542 DEBUG nova.network.os_vif_util [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.538 253542 DEBUG nova.network.os_vif_util [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.539 253542 DEBUG os_vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.544 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3dedfca-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.554 253542 INFO os_vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04')#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.614 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.615 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deleting local config drive /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config because it was imported into RBD.#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.639 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk@ac64e0e5302e4b08871927b6e4b87158 to images/148b37ac-1ea9-4409-a4ce-912163252ba4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:29:21 np0005534516 systemd-udevd[296877]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:29:21 np0005534516 kernel: tap19217fbd-a1: entered promiscuous mode
Nov 25 03:29:21 np0005534516 NetworkManager[48915]: <info>  [1764059361.6809] manager: (tap19217fbd-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00247|binding|INFO|Claiming lport 19217fbd-a123-469a-b432-be5d2543613c for this chassis.
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00248|binding|INFO|19217fbd-a123-469a-b432-be5d2543613c: Claiming fa:16:3e:85:9f:bb 10.100.0.10
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 NetworkManager[48915]: <info>  [1764059361.6917] device (tap19217fbd-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:29:21 np0005534516 NetworkManager[48915]: <info>  [1764059361.6924] device (tap19217fbd-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.694 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:9f:bb 10.100.0.10'], port_security=['fa:16:3e:85:9f:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=19217fbd-a123-469a-b432-be5d2543613c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.695 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 19217fbd-a123-469a-b432-be5d2543613c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.696 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076#033[00m
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00249|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c ovn-installed in OVS
Nov 25 03:29:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:21Z|00250|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c up in Southbound
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.711 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d16af8-0f65-457c-932a-6f32625ad03f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.711 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.713 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.713 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f88867a0-f88d-4517-b70a-a8026b2f27de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a5b566-5d91-423a-8724-6a2dbc77807a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.729 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ff83f894-da6a-4672-b1fe-8c6d3ccea5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.881 253542 DEBUG nova.compute.manager [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.884 253542 DEBUG nova.compute.manager [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Processing event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d63f7c6b-5f47-4074-9c7a-44d08984b5df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:21 np0005534516 systemd-machined[215790]: New machine qemu-43-instance-00000026.
Nov 25 03:29:21 np0005534516 systemd[1]: Started Virtual Machine qemu-43-instance-00000026.
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.938 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[099b688d-47da-43d3-a8dc-44fd72628656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 NetworkManager[48915]: <info>  [1764059361.9464] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f929366-6d8d-4bd4-826e-f8528bee7569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 nova_compute[253538]: 2025-11-25 08:29:21.975 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/148b37ac-1ea9-4409-a4ce-912163252ba4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.977 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53153c15-1f3f-4d02-9b5a-62eded2898db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.983 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0db5b8dc-309c-406c-bcfc-e373160dbe5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 NetworkManager[48915]: <info>  [1764059362.0158] device (tap52a7668b-f0): carrier: link connected
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.024 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5421ff45-99f2-4b98-9fea-0b8a6f3c35d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:22Z|00251|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.062 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c631c40e-196d-4ce3-b2cd-77c7619462c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471661, 'reachable_time': 16282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297042, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ba31fdb6-4010-42a1-9605-147c8fa3c2e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471661, 'tstamp': 471661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297043, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[511b1436-9965-43c2-b40a-5cd823d49262]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471661, 'reachable_time': 16282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297044, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a644aa-897e-4952-a2e3-63acebaa01bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:22Z|00252|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.211 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.211 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.213 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.213 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 WARNING nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received unexpected event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.234 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9770c090-a90a-4a65-bc68-d6a8cf14031a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:22 np0005534516 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 03:29:22 np0005534516 NetworkManager[48915]: <info>  [1764059362.2379] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.241 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:22Z|00253|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.246 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[170d6208-b276-4027-ab0c-3aa4b3b5bd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.247 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:29:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.248 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.311 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(ac64e0e5302e4b08871927b6e4b87158) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.363 253542 INFO nova.virt.libvirt.driver [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deleting instance files /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719_del#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.363 253542 INFO nova.virt.libvirt.driver [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deletion of /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719_del complete#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.427 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.4252894, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Started (Lifecycle Event)#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.431 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.436 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.436 253542 INFO nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG oslo.service.loopingcall [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG nova.network.neutron [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.442 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance spawned successfully.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.442 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.456 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.461 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.466 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.466 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.468 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.4257002, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.514 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.517 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.441391, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.517 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:29:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.534 253542 INFO nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 8.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.534 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 25 03:29:22 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.546 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.575 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.598 253542 INFO nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 9.18 seconds to build instance.#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.611 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:22 np0005534516 nova_compute[253538]: 2025-11-25 08:29:22.635 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(148b37ac-1ea9-4409-a4ce-912163252ba4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:22 np0005534516 podman[297134]: 2025-11-25 08:29:22.68654021 +0000 UTC m=+0.088498008 container create 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:29:22 np0005534516 podman[297134]: 2025-11-25 08:29:22.622299724 +0000 UTC m=+0.024257552 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:29:22 np0005534516 systemd[1]: Started libpod-conmon-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope.
Nov 25 03:29:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:29:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/362e28361050f34fb4be0843849315f4707ed0cafb024b05b82f317688c6fbf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:29:22 np0005534516 podman[297134]: 2025-11-25 08:29:22.774434069 +0000 UTC m=+0.176391877 container init 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:29:22 np0005534516 podman[297134]: 2025-11-25 08:29:22.781954397 +0000 UTC m=+0.183912185 container start 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:29:22 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : New worker (297172) forked
Nov 25 03:29:22 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : Loading success.
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.9 MiB/s wr, 333 op/s
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:29:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:29:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 25 03:29:23 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.798 253542 DEBUG nova.network.neutron [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.813 253542 INFO nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 1.38 seconds to deallocate network for instance.#033[00m
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.851 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.852 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:23 np0005534516 nova_compute[253538]: 2025-11-25 08:29:23.970 253542 DEBUG oslo_concurrency.processutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.029 253542 DEBUG nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.031 253542 WARNING nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.205 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.207 253542 INFO nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Terminating instance#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.207 253542 DEBUG nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:29:24 np0005534516 kernel: tap19217fbd-a1 (unregistering): left promiscuous mode
Nov 25 03:29:24 np0005534516 NetworkManager[48915]: <info>  [1764059364.2425] device (tap19217fbd-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:24Z|00254|binding|INFO|Releasing lport 19217fbd-a123-469a-b432-be5d2543613c from this chassis (sb_readonly=0)
Nov 25 03:29:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:24Z|00255|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c down in Southbound
Nov 25 03:29:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:24Z|00256|binding|INFO|Removing iface tap19217fbd-a1 ovn-installed in OVS
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.262 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:9f:bb 10.100.0.10'], port_security=['fa:16:3e:85:9f:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=19217fbd-a123-469a-b432-be5d2543613c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 19217fbd-a123-469a-b432-be5d2543613c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.265 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.266 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6962f25-a17c-44e9-999e-dbe814c6079d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.267 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 25 03:29:24 np0005534516 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Consumed 2.191s CPU time.
Nov 25 03:29:24 np0005534516 systemd-machined[215790]: Machine qemu-43-instance-00000026 terminated.
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.371 253542 DEBUG nova.compute.manager [req-67bc24ee-244d-4169-ac18-e8205afb00f5 req-3ae60689-e8e2-4181-910d-a40e84107fff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-deleted-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:24 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : haproxy version is 2.8.14-c23fe91
Nov 25 03:29:24 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : path to executable is /usr/sbin/haproxy
Nov 25 03:29:24 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [WARNING]  (297170) : Exiting Master process...
Nov 25 03:29:24 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [ALERT]    (297170) : Current worker (297172) exited with code 143 (Terminated)
Nov 25 03:29:24 np0005534516 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [WARNING]  (297170) : All workers exited. Exiting... (0)
Nov 25 03:29:24 np0005534516 systemd[1]: libpod-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope: Deactivated successfully.
Nov 25 03:29:24 np0005534516 conmon[297166]: conmon 371385d5ffc10739b94c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope/container/memory.events
Nov 25 03:29:24 np0005534516 podman[297222]: 2025-11-25 08:29:24.401589923 +0000 UTC m=+0.046008324 container died 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:29:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913488154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.447 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance destroyed successfully.#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.447 253542 DEBUG nova.objects.instance [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.453 253542 DEBUG oslo_concurrency.processutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3-userdata-shm.mount: Deactivated successfully.
Nov 25 03:29:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-362e28361050f34fb4be0843849315f4707ed0cafb024b05b82f317688c6fbf5-merged.mount: Deactivated successfully.
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.461 253542 DEBUG nova.virt.libvirt.vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:22Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.461 253542 DEBUG nova.network.os_vif_util [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.462 253542 DEBUG nova.network.os_vif_util [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.463 253542 DEBUG os_vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.465 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19217fbd-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:24 np0005534516 podman[297222]: 2025-11-25 08:29:24.471962638 +0000 UTC m=+0.116381039 container cleanup 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.471 253542 INFO os_vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1')#033[00m
Nov 25 03:29:24 np0005534516 systemd[1]: libpod-conmon-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope: Deactivated successfully.
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.494 253542 DEBUG nova.compute.provider_tree [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.514 253542 DEBUG nova.scheduler.client.report [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.536 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:24 np0005534516 podman[297269]: 2025-11-25 08:29:24.541994293 +0000 UTC m=+0.040451139 container remove 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.550 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf828c7e-a64a-449e-a008-ebeaa1e6fb31]: (4, ('Tue Nov 25 08:29:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3)\n371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3\nTue Nov 25 08:29:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3)\n371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07816e29-af07-47f3-9158-70c2243ae057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.560 253542 INFO nova.scheduler.client.report [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 0067149a-8f99-4257-af2a-fd9adcc41719#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.567 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcc4a9d-e67b-4750-9130-685b0d8c086a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e324810-1c2b-4bc1-9aa2-d87268f14914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.585 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b80679ea-15f7-4519-b0ae-3a8e12ed9d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9b90cf-cbb7-41a5-b86d-465082dde436]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471653, 'reachable_time': 26697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297297, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.606 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:29:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.606 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[677c4762-fdb8-4800-9032-5d2ebce3650a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.621 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.876 253542 INFO nova.virt.libvirt.driver [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deleting instance files /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_del#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.877 253542 INFO nova.virt.libvirt.driver [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deletion of /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_del complete#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.927 253542 INFO nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG oslo.service.loopingcall [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:29:24 np0005534516 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG nova.network.neutron [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:29:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 340 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 3.2 MiB/s wr, 494 op/s
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.245 253542 INFO nova.virt.libvirt.driver [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Snapshot image upload complete#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.245 253542 INFO nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 5.42 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:29:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:25.598 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:25.599 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.620 253542 DEBUG nova.network.neutron [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.646 253542 INFO nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.690 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.690 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:25 np0005534516 nova_compute[253538]: 2025-11-25 08:29:25.795 253542 DEBUG oslo_concurrency.processutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603604533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.333 253542 DEBUG oslo_concurrency.processutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.338 253542 DEBUG nova.compute.provider_tree [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.351 253542 DEBUG nova.scheduler.client.report [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.374 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.400 253542 INFO nova.scheduler.client.report [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.478 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.510 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.510 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.511 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.512 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.512 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.513 253542 WARNING nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.513 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.514 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.515 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.515 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.516 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.516 253542 WARNING nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:29:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 25 03:29:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 25 03:29:26 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 25 03:29:26 np0005534516 nova_compute[253538]: 2025-11-25 08:29:26.664 253542 DEBUG nova.compute.manager [req-817e43e1-64ef-4af2-93dc-ce3973720dda req-dbab75b1-4cde-4bae-9ca0-bb6228e6a783 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-deleted-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.064 253542 DEBUG nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 328 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 486 op/s
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.121 253542 INFO nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] instance snapshotting#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.185 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059352.130481, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.186 253542 INFO nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.202 253542 DEBUG nova.compute.manager [None req-a72e1acd-12cf-4ca1-9a7e-d458e9a00069 - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.343 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.343 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.344 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.344 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.345 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.347 253542 INFO nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Terminating instance#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.349 253542 DEBUG nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.360 253542 INFO nova.virt.libvirt.driver [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Beginning live snapshot process#033[00m
Nov 25 03:29:27 np0005534516 kernel: tap799c50c8-d1 (unregistering): left promiscuous mode
Nov 25 03:29:27 np0005534516 NetworkManager[48915]: <info>  [1764059367.4177] device (tap799c50c8-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00257|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=0)
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00258|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down in Southbound
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00259|binding|INFO|Removing iface tap799c50c8-d1 ovn-installed in OVS
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.457 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.459 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.461 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.463 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.464 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76001237-7f43-4b5d-975c-9009dbfbb9df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore#033[00m
Nov 25 03:29:27 np0005534516 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 25 03:29:27 np0005534516 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Consumed 13.842s CPU time.
Nov 25 03:29:27 np0005534516 systemd-machined[215790]: Machine qemu-38-instance-00000021 terminated.
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.531 253542 DEBUG nova.virt.libvirt.imagebackend [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:29:27 np0005534516 kernel: tap799c50c8-d1: entered promiscuous mode
Nov 25 03:29:27 np0005534516 systemd-udevd[297343]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:29:27 np0005534516 NetworkManager[48915]: <info>  [1764059367.5829] manager: (tap799c50c8-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00260|binding|INFO|Claiming lport 799c50c8-d1e7-4c15-a3d9-29903d576304 for this chassis.
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00261|binding|INFO|799c50c8-d1e7-4c15-a3d9-29903d576304: Claiming fa:16:3e:fd:03:96 10.100.0.12
Nov 25 03:29:27 np0005534516 kernel: tap799c50c8-d1 (unregistering): left promiscuous mode
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: hostname: compute-0
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.640 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : haproxy version is 2.8.14-c23fe91
Nov 25 03:29:27 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : path to executable is /usr/sbin/haproxy
Nov 25 03:29:27 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [WARNING]  (294717) : Exiting Master process...
Nov 25 03:29:27 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [ALERT]    (294717) : Current worker (294722) exited with code 143 (Terminated)
Nov 25 03:29:27 np0005534516 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [WARNING]  (294717) : All workers exited. Exiting... (0)
Nov 25 03:29:27 np0005534516 systemd[1]: libpod-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope: Deactivated successfully.
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 podman[297377]: 2025-11-25 08:29:27.657118384 +0000 UTC m=+0.099305377 container died 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.661 253542 INFO nova.virt.libvirt.driver [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance destroyed successfully.#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.662 253542 DEBUG nova.objects.instance [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00262|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 ovn-installed in OVS
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00263|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 up in Southbound
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00264|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=1)
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00265|if_status|INFO|Not setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down as sb is readonly
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00266|binding|INFO|Removing iface tap799c50c8-d1 ovn-installed in OVS
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.674 253542 DEBUG nova.virt.libvirt.vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:04Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.674 253542 DEBUG nova.network.os_vif_util [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.675 253542 DEBUG nova.network.os_vif_util [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00267|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=0)
Nov 25 03:29:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:27Z|00268|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down in Southbound
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.675 253542 DEBUG os_vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.679 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap799c50c8-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.683 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.687 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.696 253542 INFO os_vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1')#033[00m
Nov 25 03:29:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3-userdata-shm.mount: Deactivated successfully.
Nov 25 03:29:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5fb3ae8526a3fd45e0f449dbfc51ea63977bc5a178cd81680ec65b696722764a-merged.mount: Deactivated successfully.
Nov 25 03:29:27 np0005534516 podman[297377]: 2025-11-25 08:29:27.720235898 +0000 UTC m=+0.162422941 container cleanup 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:29:27 np0005534516 systemd[1]: libpod-conmon-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope: Deactivated successfully.
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.746 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(6c0f9d2c7de3419c86919344433cfae0) on rbd image(225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:27 np0005534516 podman[297444]: 2025-11-25 08:29:27.810347689 +0000 UTC m=+0.058385225 container remove 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8388b7-ceab-4672-9dc2-ac02a5551c22]: (4, ('Tue Nov 25 08:29:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3)\n74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3\nTue Nov 25 08:29:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3)\n74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5886fe56-8c77-4e14-878c-74d5db841da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.823 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:27 np0005534516 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c777495c-d75e-4008-9870-3e9cdad1c072]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 nova_compute[253538]: 2025-11-25 08:29:27.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d34f734-1cbc-437f-a97a-58693fbe9309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fe59fe-3874-4f88-97a9-9f6803a23e16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bacf4c0c-64d9-4877-be6f-c432bd39221c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468335, 'reachable_time': 31048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297479, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.874 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.874 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a945e3-b816-44d6-8f99-55468c06a6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.877 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb54484d-e314-4f4d-857e-9fe13438f490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.879 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.881 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:29:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.882 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ea8b00-3f2f-4f89-b37c-8de1cf1316be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.125 253542 INFO nova.virt.libvirt.driver [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deleting instance files /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef_del#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.126 253542 INFO nova.virt.libvirt.driver [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deletion of /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef_del complete#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 INFO nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG oslo.service.loopingcall [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG nova.compute.manager [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG nova.network.neutron [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.716 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk@6c0f9d2c7de3419c86919344433cfae0 to images/f65b3684-821a-49e5-bd6a-65afe2f061e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.761 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.763 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.766 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.767 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.767 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.768 253542 WARNING nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:29:28 np0005534516 nova_compute[253538]: 2025-11-25 08:29:28.876 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/f65b3684-821a-49e5-bd6a-65afe2f061e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:29:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:29:29 np0005534516 nova_compute[253538]: 2025-11-25 08:29:29.104 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(6c0f9d2c7de3419c86919344433cfae0) on rbd image(225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:29:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 280 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 4.8 MiB/s wr, 448 op/s
Nov 25 03:29:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 25 03:29:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 25 03:29:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 25 03:29:29 np0005534516 nova_compute[253538]: 2025-11-25 08:29:29.709 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(f65b3684-821a-49e5-bd6a-65afe2f061e8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:29 np0005534516 nova_compute[253538]: 2025-11-25 08:29:29.946 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 25 03:29:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 25 03:29:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.426 253542 DEBUG nova.network.neutron [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.442 253542 INFO nova.compute.manager [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 2.25 seconds to deallocate network for instance.#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.484 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.484 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.489 253542 DEBUG nova.compute.manager [req-5bec9bd3-d80a-4b58-a58f-d5b1fa8fc4d9 req-7d391862-b113-4c47-af88-3a85fbf2804c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-deleted-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.574 253542 DEBUG oslo_concurrency.processutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.884 253542 DEBUG nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.885 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.885 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 DEBUG nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:29:30 np0005534516 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 WARNING nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:29:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047568817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.085 253542 DEBUG oslo_concurrency.processutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.096 253542 DEBUG nova.compute.provider_tree [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.112 253542 DEBUG nova.scheduler.client.report [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 267 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 725 op/s
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.138 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.176 253542 INFO nova.scheduler.client.report [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 8b29fb31-718d-4926-bf4f-bae461ea70ef#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.281 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.521 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.522 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.542 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.613 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.614 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.624 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.625 253542 INFO nova.compute.claims [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.783 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.993 253542 INFO nova.virt.libvirt.driver [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Snapshot image upload complete#033[00m
Nov 25 03:29:31 np0005534516 nova_compute[253538]: 2025-11-25 08:29:31.994 253542 INFO nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 4.87 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:29:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:29:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376256686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.282 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.290 253542 DEBUG nova.compute.provider_tree [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.303 253542 DEBUG nova.scheduler.client.report [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.332 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.333 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.397 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.398 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.418 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.440 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.568 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.571 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.572 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating image(s)#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.612 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.647 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.675 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.679 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.721 253542 DEBUG nova.policy [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.767 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.768 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.769 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.769 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.792 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:32 np0005534516 nova_compute[253538]: 2025-11-25 08:29:32.796 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:32 np0005534516 podman[297670]: 2025-11-25 08:29:32.816433965 +0000 UTC m=+0.058768475 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:29:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 276 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 11 MiB/s wr, 436 op/s
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.135 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.216 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.438 253542 DEBUG nova.objects.instance [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 34941187-b6b9-4153-a8b9-6f5c00f10dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.534 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Ensure instance console log exists: /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:33 np0005534516 nova_compute[253538]: 2025-11-25 08:29:33.536 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.131 253542 DEBUG nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.168 253542 INFO nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] instance snapshotting#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.398 253542 INFO nova.virt.libvirt.driver [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Beginning live snapshot process#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.570 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.770 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Successfully created port: eab8f4a1-ffde-4387-94df-ebfe864e9534 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.782 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(46aa4661084f4d7b8fa958b23428336b) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 25 03:29:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 25 03:29:34 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.895 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk@46aa4661084f4d7b8fa958b23428336b to images/f798a86c-e34a-4469-9738-76f1311b65e9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:29:34 np0005534516 nova_compute[253538]: 2025-11-25 08:29:34.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:35 np0005534516 nova_compute[253538]: 2025-11-25 08:29:35.013 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/f798a86c-e34a-4469-9738-76f1311b65e9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:29:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 328 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 13 MiB/s wr, 485 op/s
Nov 25 03:29:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 25 03:29:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 25 03:29:35 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 25 03:29:35 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 03:29:35 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 03:29:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:35.601 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:35 np0005534516 nova_compute[253538]: 2025-11-25 08:29:35.603 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(46aa4661084f4d7b8fa958b23428336b) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:29:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 25 03:29:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 25 03:29:36 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 25 03:29:36 np0005534516 nova_compute[253538]: 2025-11-25 08:29:36.414 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(f798a86c-e34a-4469-9738-76f1311b65e9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:29:36 np0005534516 nova_compute[253538]: 2025-11-25 08:29:36.511 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059361.5100331, 0067149a-8f99-4257-af2a-fd9adcc41719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:36 np0005534516 nova_compute[253538]: 2025-11-25 08:29:36.512 253542 INFO nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:29:36 np0005534516 nova_compute[253538]: 2025-11-25 08:29:36.541 253542 DEBUG nova.compute.manager [None req-be29be4c-f1f2-4509-8523-b9d8ec6813e9 - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:36 np0005534516 podman[297941]: 2025-11-25 08:29:36.826327536 +0000 UTC m=+0.077351830 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 397 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 MiB/s wr, 316 op/s
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.212 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Successfully updated port: eab8f4a1-ffde-4387-94df-ebfe864e9534 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.231 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.232 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:29:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 25 03:29:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 25 03:29:37 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.362 253542 DEBUG nova.compute.manager [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-changed-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.363 253542 DEBUG nova.compute.manager [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Refreshing instance network info cache due to event network-changed-eab8f4a1-ffde-4387-94df-ebfe864e9534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.364 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:37 np0005534516 nova_compute[253538]: 2025-11-25 08:29:37.727 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:29:38 np0005534516 nova_compute[253538]: 2025-11-25 08:29:38.672 253542 INFO nova.virt.libvirt.driver [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Snapshot image upload complete#033[00m
Nov 25 03:29:38 np0005534516 nova_compute[253538]: 2025-11-25 08:29:38.673 253542 INFO nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 4.50 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.077 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updating instance_info_cache with network_info: [{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 408 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 11 MiB/s wr, 237 op/s
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.131 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance network_info: |[{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Refreshing network info cache for port eab8f4a1-ffde-4387-94df-ebfe864e9534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.137 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start _get_guest_xml network_info=[{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.142 253542 WARNING nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.151 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.152 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.166 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.166 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.167 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.168 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.169 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.169 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.170 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.170 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.171 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.171 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.172 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.172 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.173 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.173 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.178 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.443 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059364.4406812, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.444 253542 INFO nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.470 253542 DEBUG nova.compute.manager [None req-bdfb7777-6bc1-4ecd-b418-ec97d91f8f09 - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:29:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140178893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.636 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.658 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.662 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:39 np0005534516 nova_compute[253538]: 2025-11-25 08:29:39.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685734955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.079 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.082 253542 DEBUG nova.virt.libvirt.vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-722195337',display_name='tempest-ImagesTestJSON-server-722195337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-722195337',id=39,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-saunovqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=34941187-b6b9-4153-a8b9-6f5c00f10dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.083 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.084 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.086 253542 DEBUG nova.objects.instance [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 34941187-b6b9-4153-a8b9-6f5c00f10dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.105 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <uuid>34941187-b6b9-4153-a8b9-6f5c00f10dda</uuid>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <name>instance-00000027</name>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:name>tempest-ImagesTestJSON-server-722195337</nova:name>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:29:39</nova:creationTime>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <nova:port uuid="eab8f4a1-ffde-4387-94df-ebfe864e9534">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="serial">34941187-b6b9-4153-a8b9-6f5c00f10dda</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="uuid">34941187-b6b9-4153-a8b9-6f5c00f10dda</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/34941187-b6b9-4153-a8b9-6f5c00f10dda_disk">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:52:af:86"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <target dev="tapeab8f4a1-ff"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/console.log" append="off"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:29:40 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:29:40 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:29:40 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:29:40 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.106 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Preparing to wait for external event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.108 253542 DEBUG nova.virt.libvirt.vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-722195337',display_name='tempest-ImagesTestJSON-server-722195337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-722195337',id=39,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-saunovqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=34941187-b6b9-4153-a8b9-6f5c00f10dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.108 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.109 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.109 253542 DEBUG os_vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.115 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeab8f4a1-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.116 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeab8f4a1-ff, col_values=(('external_ids', {'iface-id': 'eab8f4a1-ffde-4387-94df-ebfe864e9534', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:af:86', 'vm-uuid': '34941187-b6b9-4153-a8b9-6f5c00f10dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:40 np0005534516 NetworkManager[48915]: <info>  [1764059380.1188] manager: (tapeab8f4a1-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.127 253542 INFO os_vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff')#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.197 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.197 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.198 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:52:af:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.198 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Using config drive#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.230 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 25 03:29:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.748 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating config drive at /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.757 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8l5_jsld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.912 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8l5_jsld" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.940 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:29:40 np0005534516 nova_compute[253538]: 2025-11-25 08:29:40.943 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.066 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updated VIF entry in instance network info cache for port eab8f4a1-ffde-4387-94df-ebfe864e9534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.067 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updating instance_info_cache with network_info: [{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.080 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.099 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.099 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Deleting local config drive /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config because it was imported into RBD.#033[00m
Nov 25 03:29:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 9.0 MiB/s wr, 182 op/s
Nov 25 03:29:41 np0005534516 kernel: tapeab8f4a1-ff: entered promiscuous mode
Nov 25 03:29:41 np0005534516 NetworkManager[48915]: <info>  [1764059381.1802] manager: (tapeab8f4a1-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 25 03:29:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:41Z|00269|binding|INFO|Claiming lport eab8f4a1-ffde-4387-94df-ebfe864e9534 for this chassis.
Nov 25 03:29:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:41Z|00270|binding|INFO|eab8f4a1-ffde-4387-94df-ebfe864e9534: Claiming fa:16:3e:52:af:86 10.100.0.12
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:41 np0005534516 systemd-machined[215790]: New machine qemu-44-instance-00000027.
Nov 25 03:29:41 np0005534516 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Nov 25 03:29:41 np0005534516 systemd-udevd[298113]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:29:41 np0005534516 NetworkManager[48915]: <info>  [1764059381.2607] device (tapeab8f4a1-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:29:41 np0005534516 NetworkManager[48915]: <info>  [1764059381.2621] device (tapeab8f4a1-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.278 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:af:86 10.100.0.12'], port_security=['fa:16:3e:52:af:86 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '34941187-b6b9-4153-a8b9-6f5c00f10dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eab8f4a1-ffde-4387-94df-ebfe864e9534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.279 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eab8f4a1-ffde-4387-94df-ebfe864e9534 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.280 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dad777-ee81-430c-acc4-214c7064f5c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.292 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.293 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e45637c3-fe7c-4e9a-aac4-5199ebb9ba83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:41Z|00271|binding|INFO|Setting lport eab8f4a1-ffde-4387-94df-ebfe864e9534 ovn-installed in OVS
Nov 25 03:29:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:29:41Z|00272|binding|INFO|Setting lport eab8f4a1-ffde-4387-94df-ebfe864e9534 up in Southbound
Nov 25 03:29:41 np0005534516 nova_compute[253538]: 2025-11-25 08:29:41.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf52e68-b9c1-4dbc-938b-3393ce1eba24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.305 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5768dd8d-e35c-4994-bfc9-35ae67cde1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 podman[298094]: 2025-11-25 08:29:41.322736273 +0000 UTC m=+0.116467200 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.327 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35170cc2-dac3-4891-8d21-cc0f7d99fc71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.355 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47b0bedf-c568-47f9-8893-4106582cba65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12c6fc1c-b7bb-4444-8ca3-fb7651f8d74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:29:41 np0005534516 NetworkManager[48915]: <info>  [1764059381.3616] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Nov 25 03:29:41 np0005534516 systemd-udevd[298120]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.031 253542 DEBUG nova.network.neutron [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated VIF entry in instance network info cache for port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.033 253542 DEBUG nova.network.neutron [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.048 253542 DEBUG oslo_concurrency.lockutils [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:09 np0005534516 rsyslogd[1007]: imjournal: 6776 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 03:32:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:09Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:f7:0c 10.100.0.14
Nov 25 03:32:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:09Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:f7:0c 10.100.0.14
Nov 25 03:32:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 175 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 307 KiB/s wr, 118 op/s
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.243 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Successfully created port: 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.438 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.438 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.457 253542 DEBUG nova.objects.instance [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.474 253542 DEBUG nova.virt.libvirt.vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.475 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.476 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.478 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.480 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.483 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.483 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:ad:f7:0c"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <target dev="tap1b0678fa-ee"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.487 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.490 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <name>instance-00000030</name>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:07</nova:creationTime>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:port uuid="1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:c2:50:72'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='tap2456d48f-94'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:ad:f7:0c'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='tap1b0678fa-ee'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 INFO nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the persistent domain config.#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tap1b0678fa-ee with device alias net1 from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:ad:f7:0c"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <target dev="tap1b0678fa-ee"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:32:09 np0005534516 kernel: tap1b0678fa-ee (unregistering): left promiscuous mode
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:32:09 np0005534516 NetworkManager[48915]: <info>  [1764059529.5559] device (tap1b0678fa-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.563 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059529.5628252, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.565 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tap1b0678fa-ee with device alias net1 for instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.566 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:32:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:09Z|00392|binding|INFO|Releasing lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 from this chassis (sb_readonly=0)
Nov 25 03:32:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:09Z|00393|binding|INFO|Setting lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 down in Southbound
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:09Z|00394|binding|INFO|Removing iface tap1b0678fa-ee ovn-installed in OVS
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.574 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <name>instance-00000030</name>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:07</nova:creationTime>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:port uuid="1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.575 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:f7:0c 10.100.0.14'], port_security=['fa:16:3e:ad:f7:0c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.576 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:c2:50:72'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target dev='tap2456d48f-94'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.577 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.574 253542 INFO nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the live domain config.
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.578 253542 DEBUG nova.virt.libvirt.vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.584 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.585 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.586 253542 DEBUG os_vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.589 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b0678fa-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.594 253542 INFO os_vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee')#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.595 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:09 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:09 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.606 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1597a09e-8eb7-4e31-8d54-f5c66ad2de91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.642 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[413f890d-82a2-4db8-8c4c-28a45858c57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.645 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5850664c-8087-483c-9fa9-0cdf10d1eb63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.675 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4c72c2de-0982-4f0b-b5b1-204bee6b7aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.701 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c82f6e-4be8-49db-84fc-d30622992473]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485392, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306745, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b18817-0ac9-4082-a395-887609109313]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485404, 'tstamp': 485404}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306746, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485408, 'tstamp': 485408}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306746, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.735 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 nova_compute[253538]: 2025-11-25 08:32:09.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.785 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.041 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.042 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.042 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 WARNING nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.045 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.045 253542 WARNING nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.309 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Successfully updated port: 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.329 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.330 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.330 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.429 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.430 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.431 253542 DEBUG nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.549 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.604 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.665 253542 DEBUG nova.compute.manager [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-deleted-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.666 253542 INFO nova.compute.manager [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Neutron deleted interface 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.666 253542 DEBUG nova.network.neutron [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.728 253542 DEBUG nova.objects.instance [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.778 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.791 253542 DEBUG nova.objects.instance [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.828 253542 DEBUG nova.virt.libvirt.vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.828 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.829 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.832 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.835 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <name>instance-00000030</name>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:c2:50:72'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='tap2456d48f-94'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.836 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.841 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <name>instance-00000030</name>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:c2:50:72'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target dev='tap2456d48f-94'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.841 253542 WARNING nova.virt.libvirt.driver [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Detaching interface fa:16:3e:ad:f7:0c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1b0678fa-ee' not found.#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.842 253542 DEBUG nova.virt.libvirt.vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.842 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.843 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.843 253542 DEBUG os_vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b0678fa-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.847 253542 INFO os_vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee')#033[00m
Nov 25 03:32:10 np0005534516 nova_compute[253538]: 2025-11-25 08:32:10.848 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:10</nova:creationTime>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 03:32:10 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:10 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:10 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:32:10 np0005534516 podman[306747]: 2025-11-25 08:32:10.869255846 +0000 UTC m=+0.101260233 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 185 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 799 KiB/s wr, 153 op/s
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.340 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.340 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.341 253542 INFO nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Terminating instance#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.342 253542 DEBUG nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:32:11 np0005534516 kernel: tap2456d48f-94 (unregistering): left promiscuous mode
Nov 25 03:32:11 np0005534516 NetworkManager[48915]: <info>  [1764059531.4085] device (tap2456d48f-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.410 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00395|binding|INFO|Releasing lport 2456d48f-9440-411c-b5f2-5c27136126f9 from this chassis (sb_readonly=0)
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00396|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 down in Southbound
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00397|binding|INFO|Removing iface tap2456d48f-94 ovn-installed in OVS
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 25 03:32:11 np0005534516 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 13.353s CPU time.
Nov 25 03:32:11 np0005534516 systemd-machined[215790]: Machine qemu-53-instance-00000030 terminated.
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.502 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.503 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.504 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.505 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[672c0e39-55b1-495b-a96c-baded9e2ea35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.507 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.511 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.511 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance network_info: |[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.517 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.525 253542 WARNING nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.533 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.534 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.538 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.539 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.540 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.541 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.542 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.542 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.545 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.545 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.551 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:11 np0005534516 kernel: tap2456d48f-94: entered promiscuous mode
Nov 25 03:32:11 np0005534516 NetworkManager[48915]: <info>  [1764059531.5609] manager: (tap2456d48f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00398|binding|INFO|Claiming lport 2456d48f-9440-411c-b5f2-5c27136126f9 for this chassis.
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00399|binding|INFO|2456d48f-9440-411c-b5f2-5c27136126f9: Claiming fa:16:3e:c2:50:72 10.100.0.7
Nov 25 03:32:11 np0005534516 kernel: tap2456d48f-94 (unregistering): left promiscuous mode
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00400|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 ovn-installed in OVS
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00401|if_status|INFO|Dropped 2 log messages in last 164 seconds (most recently, 164 seconds ago) due to excessive rate
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00402|if_status|INFO|Not setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 down as sb is readonly
Nov 25 03:32:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:11Z|00403|binding|INFO|Releasing lport 2456d48f-9440-411c-b5f2-5c27136126f9 from this chassis (sb_readonly=0)
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.593 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.598 253542 INFO nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.599 253542 DEBUG nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.606 253542 INFO nova.virt.libvirt.driver [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance destroyed successfully.#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.607 253542 DEBUG nova.objects.instance [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.626 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.630 253542 DEBUG nova.virt.libvirt.vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.630 253542 DEBUG nova.network.os_vif_util [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.631 253542 DEBUG nova.network.os_vif_util [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.632 253542 DEBUG os_vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.634 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2456d48f-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.637 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.637 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.638 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.643 253542 INFO os_vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94')#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.665 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.665 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:11 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:11 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:11 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [WARNING]  (305826) : Exiting Master process...
Nov 25 03:32:11 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [ALERT]    (305826) : Current worker (305828) exited with code 143 (Terminated)
Nov 25 03:32:11 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [WARNING]  (305826) : All workers exited. Exiting... (0)
Nov 25 03:32:11 np0005534516 systemd[1]: libpod-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84.scope: Deactivated successfully.
Nov 25 03:32:11 np0005534516 podman[306798]: 2025-11-25 08:32:11.688861624 +0000 UTC m=+0.056989793 container died a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:32:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7f6e1a4e2eca1eca6ce4c110ec30d8b792ff0a483f2c08ec38ee2cef1cf99ecc-merged.mount: Deactivated successfully.
Nov 25 03:32:11 np0005534516 podman[306798]: 2025-11-25 08:32:11.766785518 +0000 UTC m=+0.134913687 container cleanup a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:11 np0005534516 systemd[1]: libpod-conmon-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84.scope: Deactivated successfully.
Nov 25 03:32:11 np0005534516 podman[306864]: 2025-11-25 08:32:11.855964035 +0000 UTC m=+0.064808893 container remove a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00b429b1-47ee-4aca-8ee4-725e5c0f7e00]: (4, ('Tue Nov 25 08:32:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84)\na388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84\nTue Nov 25 08:32:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84)\na388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[49c61f01-d38b-4fb7-9712-2e726a3a138b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:11 np0005534516 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 nova_compute[253538]: 2025-11-25 08:32:11.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[46458f0c-9bfa-4ef3-88f5-ee539cf62265]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[698c0244-4c6a-4a47-97ee-68dcf20b78cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3ef820-8a6e-4044-ad05-5b0de1fee5be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4fe29-3e9d-4b49-a669-1e93481e1244]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485385, 'reachable_time': 43792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306879, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.949 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.950 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[372d0f4a-3a70-44d4-9246-33a39d2d8967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.951 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.954 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60686ee7-4c35-46d1-8709-595bd201c451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.955 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.957 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba3892-31d9-49dc-9ada-a580672efd25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534801478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.045 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.066 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.070 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.233 253542 INFO nova.virt.libvirt.driver [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deleting instance files /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5_del#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.235 253542 INFO nova.virt.libvirt.driver [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deletion of /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5_del complete#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.325 253542 INFO nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.326 253542 DEBUG oslo.service.loopingcall [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.327 253542 DEBUG nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.327 253542 DEBUG nova.network.neutron [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058347384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.514 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.516 253542 DEBUG nova.virt.libvirt.vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.517 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.518 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.519 253542 DEBUG nova.objects.instance [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.533 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <name>instance-00000032</name>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:11</nova:creationTime>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:07:cd:40"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <target dev="tap15af3dd8-97"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:12 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:12 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:12 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:12 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.534 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Preparing to wait for external event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.534 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.535 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.535 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.536 253542 DEBUG nova.virt.libvirt.vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.536 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.537 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.537 253542 DEBUG os_vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.540 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.541 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.546 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.547 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:12 np0005534516 NetworkManager[48915]: <info>  [1764059532.5506] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.559 253542 INFO os_vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.677 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.677 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.678 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:07:cd:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.679 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Using config drive#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.711 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.825 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.825 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.826 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.826 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.827 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.827 253542 WARNING nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing instance network info cache due to event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.829 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:12 np0005534516 nova_compute[253538]: 2025-11-25 08:32:12.829 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 203 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.252 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Creating config drive at /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.265 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppocf7t8m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.337 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.387 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.387 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.448 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppocf7t8m" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.485 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.490 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config 0feca801-4630-4450-b915-616d8496ab51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.639 253542 DEBUG nova.network.neutron [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.668 253542 INFO nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 1.34 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:13Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 03:32:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:13Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.772 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.871 253542 DEBUG oslo_concurrency.processutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.913 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.914 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.916 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config 0feca801-4630-4450-b915-616d8496ab51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.917 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deleting local config drive /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.937 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9ef845e4-48a4-486e-be3a-905adc939ef1 does not exist
Nov 25 03:32:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2e8249d5-4dd1-4c39-8980-1abacb6728f9 does not exist
Nov 25 03:32:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3ecd4065-cb91-4cdf-8bcc-189b9114580e does not exist
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:32:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:32:13 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:32:13 np0005534516 NetworkManager[48915]: <info>  [1764059533.9846] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 25 03:32:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:13Z|00404|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:32:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:13Z|00405|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:13 np0005534516 nova_compute[253538]: 2025-11-25 08:32:13.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:13 np0005534516 systemd-udevd[306769]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:14Z|00406|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:14Z|00407|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.027 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.027 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.028 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.034 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:32:14 np0005534516 NetworkManager[48915]: <info>  [1764059534.0445] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:14 np0005534516 NetworkManager[48915]: <info>  [1764059534.0454] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[20e9b7da-2566-48e7-8ac6-f14100cfb955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.050 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.052 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca00027-b6ec-4a76-8a97-cd72f17aff97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3c473b-a030-42b2-909f-27c7217fc669]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.068 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1835a091-ec72-4ecb-b6b1-194b7778a31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 systemd-machined[215790]: New machine qemu-55-instance-00000032.
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.086 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13f6d76a-b376-45ac-93df-172c0a9ed110]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.119 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5e4fc0-d45c-4457-a32e-d4815ea10cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 NetworkManager[48915]: <info>  [1764059534.1254] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95d4f47b-c15f-4c15-b075-a689f0c06c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.159 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[18609545-6e7d-41f0-8da5-a003eff74c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.162 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[045ad219-553c-4d27-b117-99c450967db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 podman[307169]: 2025-11-25 08:32:14.171574589 +0000 UTC m=+0.105075895 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:32:14 np0005534516 NetworkManager[48915]: <info>  [1764059534.1895] device (tap908154e6-30): carrier: link connected
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.194 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[22333cae-452e-4905-9930-e4ad1e90d0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.219 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed5164e-6126-4e44-ad57-a4cdef7f5ec8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488878, 'reachable_time': 30892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307277, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97c67f80-6e5b-490f-8ca4-4b1c42050e22]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488878, 'tstamp': 488878}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307286, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.267 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6307b9f-173c-46da-b6fa-7a8b4fafb61c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488878, 'reachable_time': 30892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307303, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f74e7c4-d086-45c2-bbb1-e505e5b52be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845340065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.351 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated VIF entry in instance network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.352 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.356 253542 DEBUG oslo_concurrency.processutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e23cfc78-6876-4951-a5cd-39a38081e539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.362 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:14 np0005534516 NetworkManager[48915]: <info>  [1764059534.3645] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:14 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.368 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.371 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.371 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.375 253542 WARNING nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 for instance with vm_state active and task_state deleting.
Nov 25 03:32:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:14Z|00408|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.378 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56fa2438-0a7d-465a-b56d-c0ec228ddffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.380 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 03:32:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.381 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.391 253542 DEBUG nova.compute.provider_tree [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.402 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.411 253542 DEBUG nova.scheduler.client.report [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.439 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.442 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.451 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.452 253542 INFO nova.compute.claims [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.507 253542 INFO nova.scheduler.client.report [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.583 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.622 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.709953428 +0000 UTC m=+0.091393907 container create 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.715351, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.716 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.640202834 +0000 UTC m=+0.021643333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:14 np0005534516 systemd[1]: Started libpod-conmon-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope.
Nov 25 03:32:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:14 np0005534516 podman[307426]: 2025-11-25 08:32:14.783425123 +0000 UTC m=+0.030636085 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.896 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.901451745 +0000 UTC m=+0.282892274 container init 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.903 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.7193797, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.903 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.909142862 +0000 UTC m=+0.290583351 container start 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:32:14 np0005534516 systemd[1]: libpod-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope: Deactivated successfully.
Nov 25 03:32:14 np0005534516 pensive_allen[307457]: 167 167
Nov 25 03:32:14 np0005534516 conmon[307457]: conmon 0bd5ed51b07aa59878a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope/container/memory.events
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.922 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.927 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:32:14 np0005534516 podman[307426]: 2025-11-25 08:32:14.938941043 +0000 UTC m=+0.186152015 container create 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.949 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.964 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-deleted-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.965 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.967 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Processing event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.967 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.968 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.972811373 +0000 UTC m=+0.354251872 container attach 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:32:14 np0005534516 podman[307387]: 2025-11-25 08:32:14.97454913 +0000 UTC m=+0.355989619 container died 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.969 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.974 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.975 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.975 253542 WARNING nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state building and task_state spawning.
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.976 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.9802048, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.982 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 03:32:14 np0005534516 systemd[1]: Started libpod-conmon-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope.
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.984 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.991 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance spawned successfully.
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.992 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:32:14 np0005534516 nova_compute[253538]: 2025-11-25 08:32:14.999 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.008 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.012 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.012 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.013 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.014 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6642dd5f19a45eda9399f2698ab38ce8fab03c28deb9c9cf4ff5295c9db6ea21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.023 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:32:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4bf48b96507f8f2079ed42df6dbba16301031af8c0514fe30d686a4e5306855a-merged.mount: Deactivated successfully.
Nov 25 03:32:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486061073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.076 253542 INFO nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 7.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.077 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:15 np0005534516 podman[307387]: 2025-11-25 08:32:15.0891835 +0000 UTC m=+0.470623979 container remove 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.096 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:15 np0005534516 systemd[1]: libpod-conmon-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope: Deactivated successfully.
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.107 253542 DEBUG nova.compute.provider_tree [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:15 np0005534516 podman[307426]: 2025-11-25 08:32:15.116544996 +0000 UTC m=+0.363755968 container init 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:15 np0005534516 podman[307426]: 2025-11-25 08:32:15.124297284 +0000 UTC m=+0.371508226 container start 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.138 253542 DEBUG nova.scheduler.client.report [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:15 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : New worker (307490) forked
Nov 25 03:32:15 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : Loading success.
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.160 253542 INFO nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 8.94 seconds to build instance.#033[00m
Nov 25 03:32:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 157 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.206 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.207 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.223 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.251 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.252 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.266 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:32:15 np0005534516 podman[307504]: 2025-11-25 08:32:15.276663479 +0000 UTC m=+0.052010738 container create 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.286 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:32:15 np0005534516 systemd[1]: Started libpod-conmon-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope.
Nov 25 03:32:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:15 np0005534516 podman[307504]: 2025-11-25 08:32:15.250191068 +0000 UTC m=+0.025538347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:15 np0005534516 podman[307504]: 2025-11-25 08:32:15.359910217 +0000 UTC m=+0.135257496 container init 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:15 np0005534516 podman[307504]: 2025-11-25 08:32:15.369946177 +0000 UTC m=+0.145293446 container start 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:15 np0005534516 podman[307504]: 2025-11-25 08:32:15.373342298 +0000 UTC m=+0.148689587 container attach 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.386 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.387 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.387 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating image(s)#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.410 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.505 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.524 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.527 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.564 253542 DEBUG nova.policy [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.607 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.608 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.609 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.609 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.627 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.630 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0201b222-1aa1-4d57-901c-e3c79170b567_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:15 np0005534516 nova_compute[253538]: 2025-11-25 08:32:15.945 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0201b222-1aa1-4d57-901c-e3c79170b567_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.010 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.104 253542 DEBUG nova.objects.instance [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.120 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.121 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Ensure instance console log exists: /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.121 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.122 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:16 np0005534516 nova_compute[253538]: 2025-11-25 08:32:16.122 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:16 np0005534516 sweet_northcutt[307520]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:32:16 np0005534516 sweet_northcutt[307520]: --> relative data size: 1.0
Nov 25 03:32:16 np0005534516 sweet_northcutt[307520]: --> All data devices are unavailable
Nov 25 03:32:16 np0005534516 systemd[1]: libpod-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Deactivated successfully.
Nov 25 03:32:16 np0005534516 systemd[1]: libpod-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Consumed 1.018s CPU time.
Nov 25 03:32:16 np0005534516 podman[307715]: 2025-11-25 08:32:16.619692745 +0000 UTC m=+0.042117203 container died 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a-merged.mount: Deactivated successfully.
Nov 25 03:32:16 np0005534516 podman[307715]: 2025-11-25 08:32:16.687782915 +0000 UTC m=+0.110207353 container remove 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:16 np0005534516 systemd[1]: libpod-conmon-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 182 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.8 MiB/s wr, 217 op/s
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.287211675 +0000 UTC m=+0.038182228 container create 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:32:17 np0005534516 kernel: tap3c3c9c20-84 (unregistering): left promiscuous mode
Nov 25 03:32:17 np0005534516 NetworkManager[48915]: <info>  [1764059537.3189] device (tap3c3c9c20-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:17 np0005534516 systemd[1]: Started libpod-conmon-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope.
Nov 25 03:32:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:17Z|00409|binding|INFO|Releasing lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 from this chassis (sb_readonly=0)
Nov 25 03:32:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:17Z|00410|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 down in Southbound
Nov 25 03:32:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:17Z|00411|binding|INFO|Removing iface tap3c3c9c20-84 ovn-installed in OVS
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.332 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.333 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.335 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.339 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e641a58-9553-4aa9-a04e-db450fea5b70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.340 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore#033[00m
Nov 25 03:32:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.271066931 +0000 UTC m=+0.022037504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.380219674 +0000 UTC m=+0.131190247 container init 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:32:17 np0005534516 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 12.754s CPU time.
Nov 25 03:32:17 np0005534516 systemd-machined[215790]: Machine qemu-54-instance-00000031 terminated.
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.388006284 +0000 UTC m=+0.138976867 container start 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.390558092 +0000 UTC m=+0.141528655 container attach 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:32:17 np0005534516 sleepy_wozniak[307889]: 167 167
Nov 25 03:32:17 np0005534516 systemd[1]: libpod-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 conmon[307889]: conmon 45e73f2cb71fecf4e95d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope/container/memory.events
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.394940451 +0000 UTC m=+0.145911014 container died 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.404 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Successfully created port: d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-be42d92085fef3901b00721071f360a2aeae076a8f78e3735f93ed543f6d3afb-merged.mount: Deactivated successfully.
Nov 25 03:32:17 np0005534516 podman[307869]: 2025-11-25 08:32:17.437199546 +0000 UTC m=+0.188170099 container remove 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:32:17 np0005534516 systemd[1]: libpod-conmon-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:17 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:17 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [WARNING]  (306504) : Exiting Master process...
Nov 25 03:32:17 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [ALERT]    (306504) : Current worker (306506) exited with code 143 (Terminated)
Nov 25 03:32:17 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [WARNING]  (306504) : All workers exited. Exiting... (0)
Nov 25 03:32:17 np0005534516 systemd[1]: libpod-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 podman[307919]: 2025-11-25 08:32:17.508692547 +0000 UTC m=+0.054198117 container died 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.556 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.623 253542 DEBUG nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.623 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 WARNING nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.625 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.633 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.641 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.642 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project
-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:03Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.642 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.644 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.644 253542 DEBUG os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e001beb4c5ea87434b006a998a71e0ebe65ce32375515f22c2726c3467336969-merged.mount: Deactivated successfully.
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.652 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3c9c20-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.660 253542 INFO os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')#033[00m
Nov 25 03:32:17 np0005534516 podman[307919]: 2025-11-25 08:32:17.68104293 +0000 UTC m=+0.226548490 container cleanup 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:32:17 np0005534516 systemd[1]: libpod-conmon-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b.scope: Deactivated successfully.
Nov 25 03:32:17 np0005534516 podman[307956]: 2025-11-25 08:32:17.702779403 +0000 UTC m=+0.124812915 container create 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:32:17 np0005534516 systemd[1]: Started libpod-conmon-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope.
Nov 25 03:32:17 np0005534516 podman[307956]: 2025-11-25 08:32:17.680416302 +0000 UTC m=+0.102449844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:17 np0005534516 podman[307997]: 2025-11-25 08:32:17.762203701 +0000 UTC m=+0.056857980 container remove 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.768 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[203429d6-584e-45d6-bad3-fbef1ef02222]: (4, ('Tue Nov 25 08:32:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b)\n8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b\nTue Nov 25 08:32:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b)\n8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f57adde-8f26-4dc3-91e7-cea9028cc82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.771 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.815 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b786f93-9b36-435e-b359-9594e596ded4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 podman[307956]: 2025-11-25 08:32:17.833031034 +0000 UTC m=+0.255064586 container init 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:32:17 np0005534516 nova_compute[253538]: 2025-11-25 08:32:17.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2496451-5e81-4ecd-a525-4d6d79fe6690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59d68911-7f2e-46d9-89ce-259f5c480038]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 podman[307956]: 2025-11-25 08:32:17.840726381 +0000 UTC m=+0.262759903 container start 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:17 np0005534516 podman[307956]: 2025-11-25 08:32:17.844095332 +0000 UTC m=+0.266128854 container attach 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.853 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc0c412-33fe-4c12-83b0-a6c1983dd685]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487201, 'reachable_time': 39245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308036, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:17 np0005534516 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.856 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.856 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1538ab-9c65-4c49-b973-2f57c302a8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11066372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.035 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.085 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting instance files /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.086 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deletion of /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del complete#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.097 253542 DEBUG nova.compute.manager [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG nova.compute.manager [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing instance network info cache due to event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.237 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.237 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating image(s)#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.256 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.280 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.306 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.311 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.395 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.398 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.399 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.400 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.423 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.428 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.505 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.506 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3991MB free_disk=59.91162109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.507 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.507 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.567861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538567951, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 251, "total_data_size": 628411, "memory_usage": 639368, "flush_reason": "Manual Compaction"}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538572662, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 622088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30128, "largest_seqno": 30738, "table_properties": {"data_size": 618836, "index_size": 1160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7786, "raw_average_key_size": 19, "raw_value_size": 612236, "raw_average_value_size": 1522, "num_data_blocks": 52, "num_entries": 402, "num_filter_entries": 402, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059496, "oldest_key_time": 1764059496, "file_creation_time": 1764059538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4818 microseconds, and 2284 cpu microseconds.
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.572704) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 622088 bytes OK
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.572723) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574177) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574225) EVENT_LOG_v1 {"time_micros": 1764059538574213, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574256) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 625059, prev total WAL file size 625059, number of live WAL files 2.
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(607KB)], [65(7913KB)]
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538575024, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8725539, "oldest_snapshot_seqno": -1}
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0201b222-1aa1-4d57-901c-e3c79170b567 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5311 keys, 7139090 bytes, temperature: kUnknown
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538619727, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7139090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7104605, "index_size": 20103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134856, "raw_average_key_size": 25, "raw_value_size": 7009882, "raw_average_value_size": 1319, "num_data_blocks": 820, "num_entries": 5311, "num_filter_entries": 5311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.619959) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7139090 bytes
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.623698) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 159.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(25.5) write-amplify(11.5) OK, records in: 5823, records dropped: 512 output_compression: NoCompression
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.623727) EVENT_LOG_v1 {"time_micros": 1764059538623715, "job": 36, "event": "compaction_finished", "compaction_time_micros": 44770, "compaction_time_cpu_micros": 23747, "output_level": 6, "num_output_files": 1, "total_output_size": 7139090, "num_input_records": 5823, "num_output_records": 5311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538623975, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538625475, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]: {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    "0": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "devices": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "/dev/loop3"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            ],
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_name": "ceph_lv0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_size": "21470642176",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "name": "ceph_lv0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "tags": {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_name": "ceph",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.crush_device_class": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.encrypted": "0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_id": "0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.vdo": "0"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            },
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "vg_name": "ceph_vg0"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        }
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    ],
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    "1": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "devices": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "/dev/loop4"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            ],
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_name": "ceph_lv1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_size": "21470642176",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "name": "ceph_lv1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "tags": {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_name": "ceph",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.crush_device_class": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.encrypted": "0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_id": "1",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.vdo": "0"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            },
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "vg_name": "ceph_vg1"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        }
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    ],
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    "2": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "devices": [
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "/dev/loop5"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            ],
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_name": "ceph_lv2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_size": "21470642176",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "name": "ceph_lv2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "tags": {
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.cluster_name": "ceph",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.crush_device_class": "",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.encrypted": "0",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osd_id": "2",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:                "ceph.vdo": "0"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            },
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "type": "block",
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:            "vg_name": "ceph_vg2"
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:        }
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]:    ]
Nov 25 03:32:18 np0005534516 jolly_proskuriakova[308031]: }
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.713 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:18 np0005534516 systemd[1]: libpod-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope: Deactivated successfully.
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.758 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:18 np0005534516 podman[308143]: 2025-11-25 08:32:18.773083599 +0000 UTC m=+0.042887664 container died 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:32:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b-merged.mount: Deactivated successfully.
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.840 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:18 np0005534516 podman[308143]: 2025-11-25 08:32:18.841808056 +0000 UTC m=+0.111612041 container remove 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:32:18 np0005534516 systemd[1]: libpod-conmon-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope: Deactivated successfully.
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.962 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.969 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Ensure instance console log exists: /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.970 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.970 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.971 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.976 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start _get_guest_xml network_info=[{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.982 253542 WARNING nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.988 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.989 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.996 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:18 np0005534516 nova_compute[253538]: 2025-11-25 08:32:18.996 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.013 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45810366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 166 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 232 op/s
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.213 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.218 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.229 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.453 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.453 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443797501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.497 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.516502199 +0000 UTC m=+0.050502418 container create 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.523 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.530 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:19 np0005534516 systemd[1]: Started libpod-conmon-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope.
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.490775758 +0000 UTC m=+0.024776007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.61549965 +0000 UTC m=+0.149499899 container init 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.62333224 +0000 UTC m=+0.157332469 container start 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.625982832 +0000 UTC m=+0.159983051 container attach 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:32:19 np0005534516 cranky_taussig[308450]: 167 167
Nov 25 03:32:19 np0005534516 systemd[1]: libpod-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope: Deactivated successfully.
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.628752426 +0000 UTC m=+0.162752645 container died 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5912157e55dad6a8a45e8bd09c364ada27ec40abc089fac2690a8f8019686762-merged.mount: Deactivated successfully.
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.656 253542 DEBUG nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.658 253542 WARNING nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:32:19 np0005534516 podman[308412]: 2025-11-25 08:32:19.661845976 +0000 UTC m=+0.195846195 container remove 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:32:19 np0005534516 systemd[1]: libpod-conmon-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope: Deactivated successfully.
Nov 25 03:32:19 np0005534516 podman[308492]: 2025-11-25 08:32:19.865030046 +0000 UTC m=+0.068438340 container create 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:19 np0005534516 systemd[1]: Started libpod-conmon-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope.
Nov 25 03:32:19 np0005534516 podman[308492]: 2025-11-25 08:32:19.839250503 +0000 UTC m=+0.042658877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133880493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.966 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.968 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:18Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:19 np0005534516 podman[308492]: 2025-11-25 08:32:19.970437939 +0000 UTC m=+0.173846253 container init 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.969 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.972 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.978 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <uuid>7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</uuid>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <name>instance-00000031</name>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1569463086</nova:name>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:18</nova:creationTime>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <nova:port uuid="3c3c9c20-8436-4b41-9184-2061010ba6e2">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="serial">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="uuid">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:c4:6e:6c"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <target dev="tap3c3c9c20-84"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log" append="off"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:19 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:19 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:19 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:19 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.978 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Preparing to wait for external event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.979 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.980 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.980 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.981 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:18Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.982 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:19 np0005534516 podman[308492]: 2025-11-25 08:32:19.982258987 +0000 UTC m=+0.185667311 container start 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.983 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.984 253542 DEBUG os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:19 np0005534516 podman[308492]: 2025-11-25 08:32:19.986163842 +0000 UTC m=+0.189572146 container attach 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.987 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3c9c20-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:19 np0005534516 nova_compute[253538]: 2025-11-25 08:32:19.992 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c3c9c20-84, col_values=(('external_ids', {'iface-id': '3c3c9c20-8436-4b41-9184-2061010ba6e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:6e:6c', 'vm-uuid': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:20 np0005534516 NetworkManager[48915]: <info>  [1764059540.0391] manager: (tap3c3c9c20-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.047 253542 INFO os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:c4:6e:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.098 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Using config drive#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.120 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.145 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Successfully updated port: d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.148 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.192 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'keypairs' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.469 253542 DEBUG nova.compute.manager [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-changed-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.470 253542 DEBUG nova.compute.manager [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Refreshing instance network info cache due to event network-changed-d02d0c40-ff59-4db1-8105-d39f0c8b67c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.470 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.611 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]: {
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_id": 1,
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "type": "bluestore"
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    },
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_id": 2,
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "type": "bluestore"
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    },
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_id": 0,
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:        "type": "bluestore"
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]:    }
Nov 25 03:32:20 np0005534516 frosty_nobel[308508]: }
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.973 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating config drive at /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config#033[00m
Nov 25 03:32:20 np0005534516 systemd[1]: libpod-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope: Deactivated successfully.
Nov 25 03:32:20 np0005534516 nova_compute[253538]: 2025-11-25 08:32:20.981 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bbeth4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:21 np0005534516 podman[308564]: 2025-11-25 08:32:21.011833788 +0000 UTC m=+0.020887713 container died 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:32:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c-merged.mount: Deactivated successfully.
Nov 25 03:32:21 np0005534516 podman[308564]: 2025-11-25 08:32:21.075675243 +0000 UTC m=+0.084729188 container remove 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:32:21 np0005534516 systemd[1]: libpod-conmon-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope: Deactivated successfully.
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.088 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated VIF entry in instance network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.090 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.125 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bbeth4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:32:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:32:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:21 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7a078195-7fdf-4c9b-bd17-b04cbf9484cc does not exist
Nov 25 03:32:21 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 47fbdb3c-8454-4957-b6d2-7303bf04a38e does not exist
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.164 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.171 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 198 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 256 op/s
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.299 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.329 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.330 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting local config drive /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:21 np0005534516 kernel: tap3c3c9c20-84: entered promiscuous mode
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.3956] manager: (tap3c3c9c20-84): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:21Z|00412|binding|INFO|Claiming lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 for this chassis.
Nov 25 03:32:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:21Z|00413|binding|INFO|3c3c9c20-8436-4b41-9184-2061010ba6e2: Claiming fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 03:32:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:21Z|00414|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 ovn-installed in OVS
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 systemd-machined[215790]: New machine qemu-56-instance-00000031.
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.447 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.447 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:21 np0005534516 systemd[1]: Started Virtual Machine qemu-56-instance-00000031.
Nov 25 03:32:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:21Z|00415|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 up in Southbound
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.465 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.466 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis#033[00m
Nov 25 03:32:21 np0005534516 systemd-udevd[308679]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.468 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.478 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9e52bd-3017-4f9e-994a-f871996c0a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.479 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.480 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[df1f4b91-8ac1-49f3-9d53-10fa56c70efc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.480 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.481 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2052bdfe-c045-4fa3-b511-1b5ca966acc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.4870] device (tap3c3c9c20-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.4918] device (tap3c3c9c20-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.492 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1aa2bf-2acc-4bab-84fe-b843450688ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.516 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4576f327-ab0a-4777-b392-d7852fec9bb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.540 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf4c154-9779-47aa-9700-40b3b73538e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.5467] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.547 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[013b28a3-3845-44d6-99ab-0132135c10b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.586 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1181fd3-2d14-4343-8e86-fb43725e3cc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.588 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fc967540-7010-4433-b256-b4afdc5defd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.6066] device (tapeb25945d-60): carrier: link connected
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.611 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[33ed429b-81c4-4605-8a1b-f0dd2bf3bf8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.625 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[039feeb9-33f6-4e60-ba46-1361d3591b0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489620, 'reachable_time': 15279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308712, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3344277-8c8f-453e-84b5-338bd1fbbf83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489620, 'tstamp': 489620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308713, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.656 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e91a7f8-2337-411b-9d60-cf477cba0c3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489620, 'reachable_time': 15279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308714, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7376603-e872-43e5-b121-cdc1f43c12fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.729 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdcc60-7092-4a2b-a7d4-6aa7bf328064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.730 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 03:32:21 np0005534516 NetworkManager[48915]: <info>  [1764059541.7343] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:21Z|00416|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 nova_compute[253538]: 2025-11-25 08:32:21.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.759 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.759 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75c1b7e3-5479-4127-8627-04ecb2a9cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.760 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:32:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.760 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:32:22 np0005534516 podman[308746]: 2025-11-25 08:32:22.105700096 +0000 UTC m=+0.047993201 container create a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:32:22 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:22 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:32:22 np0005534516 systemd[1]: Started libpod-conmon-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope.
Nov 25 03:32:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:22 np0005534516 podman[308746]: 2025-11-25 08:32:22.082618106 +0000 UTC m=+0.024911231 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.179 253542 DEBUG nova.compute.manager [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.180 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.181 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.181 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.182 253542 DEBUG nova.compute.manager [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Processing event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:32:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e58279ce3b57ee04aae33b135da848ead0b67f41b7f32ad9c03b3604e4992d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:22 np0005534516 podman[308746]: 2025-11-25 08:32:22.199921819 +0000 UTC m=+0.142214964 container init a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:32:22 np0005534516 podman[308746]: 2025-11-25 08:32:22.206219908 +0000 UTC m=+0.148513023 container start a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 03:32:22 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : New worker (308768) forked
Nov 25 03:32:22 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : Loading success.
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.285 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.306 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.306 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance network_info: |[{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.307 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.307 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Refreshing network info cache for port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.310 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start _get_guest_xml network_info=[{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.315 253542 WARNING nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.320 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.321 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.327 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.327 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.328 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.328 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.331 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.333 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848310532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.801 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.821 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.825 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.857 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.858 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.8024304, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.858 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Started (Lifecycle Event)#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.860 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.864 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.867 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance spawned successfully.#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.867 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.889 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.897 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.901 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.902 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.902 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.903 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.903 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.904 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.931 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.931 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.802881, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.932 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.968 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.971 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.863391, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.972 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.990 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.996 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:22 np0005534516 nova_compute[253538]: 2025-11-25 08:32:22.998 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.033 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.139 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.140 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.140 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 177 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.251 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622454695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.333 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.335 253542 DEBUG nova.virt.libvirt.vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:15Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.335 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.336 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.337 253542 DEBUG nova.objects.instance [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.350 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <uuid>0201b222-1aa1-4d57-901c-e3c79170b567</uuid>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <name>instance-00000033</name>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersTestJSON-server-93037066</nova:name>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:22</nova:creationTime>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <nova:port uuid="d02d0c40-ff59-4db1-8105-d39f0c8b67c5">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="serial">0201b222-1aa1-4d57-901c-e3c79170b567</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="uuid">0201b222-1aa1-4d57-901c-e3c79170b567</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0201b222-1aa1-4d57-901c-e3c79170b567_disk">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0201b222-1aa1-4d57-901c-e3c79170b567_disk.config">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:5c:88:0d"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <target dev="tapd02d0c40-ff"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/console.log" append="off"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:23 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:23 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:23 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:23 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.355 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Preparing to wait for external event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.357 253542 DEBUG nova.virt.libvirt.vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:15Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.357 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.358 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.358 253542 DEBUG os_vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.360 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.360 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02d0c40-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd02d0c40-ff, col_values=(('external_ids', {'iface-id': 'd02d0c40-ff59-4db1-8105-d39f0c8b67c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:88:0d', 'vm-uuid': '0201b222-1aa1-4d57-901c-e3c79170b567'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:23 np0005534516 NetworkManager[48915]: <info>  [1764059543.3662] manager: (tapd02d0c40-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.373 253542 INFO os_vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff')#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.426 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:5c:88:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Using config drive#033[00m
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:23 np0005534516 nova_compute[253538]: 2025-11-25 08:32:23.455 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.106 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating config drive at /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.111 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xfdw1m9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.147 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updated VIF entry in instance network info cache for port d02d0c40-ff59-4db1-8105-d39f0c8b67c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.148 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.164 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.255 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xfdw1m9" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.276 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.278 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.426 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.427 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deleting local config drive /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:24 np0005534516 kernel: tapd02d0c40-ff: entered promiscuous mode
Nov 25 03:32:24 np0005534516 NetworkManager[48915]: <info>  [1764059544.4872] manager: (tapd02d0c40-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:24Z|00417|binding|INFO|Claiming lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for this chassis.
Nov 25 03:32:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:24Z|00418|binding|INFO|d02d0c40-ff59-4db1-8105-d39f0c8b67c5: Claiming fa:16:3e:5c:88:0d 10.100.0.7
Nov 25 03:32:24 np0005534516 systemd-udevd[308952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:24 np0005534516 NetworkManager[48915]: <info>  [1764059544.5336] device (tapd02d0c40-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:24 np0005534516 NetworkManager[48915]: <info>  [1764059544.5359] device (tapd02d0c40-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:24Z|00419|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 ovn-installed in OVS
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:24 np0005534516 systemd-machined[215790]: New machine qemu-57-instance-00000033.
Nov 25 03:32:24 np0005534516 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.947 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059544.9471781, 0201b222-1aa1-4d57-901c-e3c79170b567 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.949 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Started (Lifecycle Event)#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.966 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.971 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059544.9483302, 0201b222-1aa1-4d57-901c-e3c79170b567 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.971 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.988 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:24 np0005534516 nova_compute[253538]: 2025-11-25 08:32:24.992 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:25 np0005534516 nova_compute[253538]: 2025-11-25 08:32:25.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 181 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 282 op/s
Nov 25 03:32:25 np0005534516 nova_compute[253538]: 2025-11-25 08:32:25.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:25 np0005534516 nova_compute[253538]: 2025-11-25 08:32:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:32:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:25Z|00420|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 up in Southbound
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.665 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:88:0d 10.100.0.7'], port_security=['fa:16:3e:5c:88:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0201b222-1aa1-4d57-901c-e3c79170b567', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d02d0c40-ff59-4db1-8105-d39f0c8b67c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.667 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.669 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.682 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38abe753-625a-4ab1-98b3-9b29678ac1a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.684 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.686 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.686 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33123b87-1dc6-4ad7-bff7-c01a00fdbf4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9d57c6-6426-440f-84dc-7003811c3b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.698 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e355cad0-aa25-41ee-998b-71610fb10442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.711 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58594e66-955e-4f74-8590-4aa677a52b9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.739 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae355c1-a9cc-4aad-922f-8f2e2128eb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.745 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1143db2-465a-41ab-b777-67bc3216a93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 NetworkManager[48915]: <info>  [1764059545.7465] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Nov 25 03:32:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.779 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cb8338-e082-417c-a2f3-00e7c8ac8242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.782 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0f39f5-a027-4e19-ba46-8ca68a090c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 NetworkManager[48915]: <info>  [1764059545.8042] device (tapa66e51b8-e0): carrier: link connected
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.814 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[496a37d9-3b21-493a-8f6d-5562f3df7a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.831 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c1c90e-683a-4bd0-b8e4-3ec54997a13d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490040, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309032, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c26f495c-2569-4562-be40-ded0104d54bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490040, 'tstamp': 490040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309033, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.865 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[02cd96db-27e8-4d52-bbc2-49111edc2a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490040, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309034, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b718439-5db6-4ee4-8fe6-d1982058922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9562818-5af4-4279-9529-86782bb72516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:25 np0005534516 NetworkManager[48915]: <info>  [1764059545.9937] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 25 03:32:25 np0005534516 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.002 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:26Z|00421|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:32:26 np0005534516 nova_compute[253538]: 2025-11-25 08:32:26.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.036 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:26 np0005534516 nova_compute[253538]: 2025-11-25 08:32:26.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[481bfa73-5e55-4d29-b080-f7e37414f165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.038 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:32:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.042 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:32:26 np0005534516 podman[309065]: 2025-11-25 08:32:26.431254069 +0000 UTC m=+0.057528596 container create 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:32:26 np0005534516 systemd[1]: Started libpod-conmon-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope.
Nov 25 03:32:26 np0005534516 podman[309065]: 2025-11-25 08:32:26.401463979 +0000 UTC m=+0.027738536 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c7019eddc15bde3ab32eaf7c4b3a0396a948ded755fecc5de1ea0cb4bb9f24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:26 np0005534516 podman[309065]: 2025-11-25 08:32:26.520361155 +0000 UTC m=+0.146635662 container init 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 03:32:26 np0005534516 podman[309065]: 2025-11-25 08:32:26.526361706 +0000 UTC m=+0.152636213 container start 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:32:26 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : New worker (309086) forked
Nov 25 03:32:26 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : Loading success.
Nov 25 03:32:26 np0005534516 nova_compute[253538]: 2025-11-25 08:32:26.597 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059531.5846958, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:26 np0005534516 nova_compute[253538]: 2025-11-25 08:32:26.597 253542 INFO nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:32:26 np0005534516 nova_compute[253538]: 2025-11-25 08:32:26.613 253542 DEBUG nova.compute.manager [None req-9c864453-b358-470d-931c-ea3f52d5915d - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 190 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 252 op/s
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.248 253542 DEBUG nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.248 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 WARNING nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.330 253542 DEBUG nova.compute.manager [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.332 253542 DEBUG nova.compute.manager [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Processing event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.332 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059547.33592, 0201b222-1aa1-4d57-901c-e3c79170b567 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.360 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.362 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.365 253542 INFO nova.virt.libvirt.driver [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance spawned successfully.#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.365 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.399 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.406 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.444 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.475 253542 INFO nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 12.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.476 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.531 253542 INFO nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 13.55 seconds to build instance.#033[00m
Nov 25 03:32:27 np0005534516 nova_compute[253538]: 2025-11-25 08:32:27.547 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:27Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:27Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.127 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.447 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.449 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.452 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.452 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.453 253542 INFO nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Terminating instance#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.454 253542 DEBUG nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.469 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:32:28 np0005534516 kernel: tap3c3c9c20-84 (unregistering): left promiscuous mode
Nov 25 03:32:28 np0005534516 NetworkManager[48915]: <info>  [1764059548.4991] device (tap3c3c9c20-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00422|binding|INFO|Releasing lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 from this chassis (sb_readonly=0)
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00423|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 down in Southbound
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00424|binding|INFO|Removing iface tap3c3c9c20-84 ovn-installed in OVS
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.523 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.528 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.529 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[683fb4d7-0864-404d-9627-329b946c60c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.530 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 03:32:28 np0005534516 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Consumed 6.983s CPU time.
Nov 25 03:32:28 np0005534516 systemd-machined[215790]: Machine qemu-56-instance-00000031 terminated.
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.598 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.598 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.608 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.608 253542 INFO nova.compute.claims [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.679 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.680 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:28 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:28 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [WARNING]  (308766) : Exiting Master process...
Nov 25 03:32:28 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [ALERT]    (308766) : Current worker (308768) exited with code 143 (Terminated)
Nov 25 03:32:28 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [WARNING]  (308766) : All workers exited. Exiting... (0)
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.680 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.686 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.686 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: libpod-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope: Deactivated successfully.
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.687 253542 INFO nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Terminating instance#033[00m
Nov 25 03:32:28 np0005534516 conmon[308762]: conmon a6adb3fa2f4352cc0f42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope/container/memory.events
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.688 253542 DEBUG nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.692 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.693 253542 DEBUG nova.objects.instance [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:28 np0005534516 podman[309115]: 2025-11-25 08:32:28.695260917 +0000 UTC m=+0.060152158 container died a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.706 253542 DEBUG nova.virt.libvirt.vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:23Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.706 253542 DEBUG nova.network.os_vif_util [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.707 253542 DEBUG nova.network.os_vif_util [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.707 253542 DEBUG os_vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.709 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3c9c20-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.765 253542 INFO os_vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-91e58279ce3b57ee04aae33b135da848ead0b67f41b7f32ad9c03b3604e4992d-merged.mount: Deactivated successfully.
Nov 25 03:32:28 np0005534516 podman[309115]: 2025-11-25 08:32:28.780109068 +0000 UTC m=+0.145000319 container cleanup a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:32:28 np0005534516 kernel: tapd02d0c40-ff (unregistering): left promiscuous mode
Nov 25 03:32:28 np0005534516 NetworkManager[48915]: <info>  [1764059548.7859] device (tapd02d0c40-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:28 np0005534516 systemd[1]: libpod-conmon-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope: Deactivated successfully.
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00425|binding|INFO|Releasing lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 from this chassis (sb_readonly=0)
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00426|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 down in Southbound
Nov 25 03:32:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:28Z|00427|binding|INFO|Removing iface tapd02d0c40-ff ovn-installed in OVS
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.803 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:88:0d 10.100.0.7'], port_security=['fa:16:3e:5c:88:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0201b222-1aa1-4d57-901c-e3c79170b567', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d02d0c40-ff59-4db1-8105-d39f0c8b67c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.827 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 1.777s CPU time.
Nov 25 03:32:28 np0005534516 systemd-machined[215790]: Machine qemu-57-instance-00000033 terminated.
Nov 25 03:32:28 np0005534516 podman[309170]: 2025-11-25 08:32:28.859771548 +0000 UTC m=+0.050542819 container remove a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.865 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44662b88-01ae-479e-a7ed-d564ccf17388]: (4, ('Tue Nov 25 08:32:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd)\na6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd\nTue Nov 25 08:32:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd)\na6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8d037f-5c18-4b7f-8217-191c6c847b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[372bae15-f129-4c15-872b-618f43271ded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba23cd-c223-4e43-906a-81dc1ae63fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e68d80f2-68c2-4c3e-a9ee-bef78cc74623]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.919 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3695bc8e-046a-4209-86f0-51f57927f3b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489613, 'reachable_time': 25022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309196, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab94bbf-589d-4501-9663-bd25775fe600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.926 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.926 253542 INFO nova.virt.libvirt.driver [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance destroyed successfully.#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.927 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db184aff-2160-490c-9f4a-8bbe35113329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.927 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.928 253542 DEBUG nova.objects.instance [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.942 253542 DEBUG nova.virt.libvirt.vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:27Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.942 253542 DEBUG nova.network.os_vif_util [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.944 253542 DEBUG nova.network.os_vif_util [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.944 253542 DEBUG os_vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.948 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02d0c40-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:28 np0005534516 nova_compute[253538]: 2025-11-25 08:32:28.956 253542 INFO os_vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff')#033[00m
Nov 25 03:32:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:32:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:32:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:32:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:32:29 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:29 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:29 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [WARNING]  (309084) : Exiting Master process...
Nov 25 03:32:29 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [ALERT]    (309084) : Current worker (309086) exited with code 143 (Terminated)
Nov 25 03:32:29 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [WARNING]  (309084) : All workers exited. Exiting... (0)
Nov 25 03:32:29 np0005534516 systemd[1]: libpod-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope: Deactivated successfully.
Nov 25 03:32:29 np0005534516 podman[309257]: 2025-11-25 08:32:29.086203664 +0000 UTC m=+0.047197510 container died 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:32:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e6c7019eddc15bde3ab32eaf7c4b3a0396a948ded755fecc5de1ea0cb4bb9f24-merged.mount: Deactivated successfully.
Nov 25 03:32:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 199 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 25 03:32:29 np0005534516 podman[309257]: 2025-11-25 08:32:29.213460394 +0000 UTC m=+0.174454220 container cleanup 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:32:29 np0005534516 systemd[1]: libpod-conmon-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope: Deactivated successfully.
Nov 25 03:32:29 np0005534516 podman[309287]: 2025-11-25 08:32:29.291128131 +0000 UTC m=+0.055737579 container remove 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f55be7-29c6-4d2b-afc9-25da29d01944]: (4, ('Tue Nov 25 08:32:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d)\n071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d\nTue Nov 25 08:32:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d)\n071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.298 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e92631fc-d7fe-41b3-b71c-9b421898ff60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.299 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:29 np0005534516 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 03:32:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982273807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.310 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[06ff67f3-1b8f-44bc-8a2a-6916e4782957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.325 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4d391a-4f5f-4234-94bc-85c9dcd35e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.326 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60f00b6f-8826-4382-aa40-3841e407d1d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.327 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.333 253542 DEBUG nova.compute.provider_tree [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fce89d1d-1170-468e-8d26-4e740814cf1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490033, 'reachable_time': 33711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309304, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.344 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.344 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e337028d-fdcb-44b9-995c-1e68f9e3928c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.348 253542 DEBUG nova.scheduler.client.report [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.356 253542 INFO nova.virt.libvirt.driver [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting instance files /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.356 253542 INFO nova.virt.libvirt.driver [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deletion of /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del complete#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.378 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.379 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.442 253542 INFO nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG oslo.service.loopingcall [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG nova.network.neutron [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.457 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.458 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.498 253542 INFO nova.virt.libvirt.driver [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deleting instance files /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567_del#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.499 253542 INFO nova.virt.libvirt.driver [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deletion of /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567_del complete#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.520 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.562 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.568 253542 INFO nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.568 253542 DEBUG oslo.service.loopingcall [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.569 253542 DEBUG nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.569 253542 DEBUG nova.network.neutron [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.587 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.587 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.589 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.589 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.591 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.591 253542 WARNING nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.660 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.661 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.662 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating image(s)#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.688 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.715 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.743 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.746 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:29 np0005534516 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.790 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.790 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 WARNING nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received unexpected event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 WARNING nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received unexpected event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.842 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.845 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.845 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.881 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:29 np0005534516 nova_compute[253538]: 2025-11-25 08:32:29.886 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8191f951-44bc-4371-957a-f2e7d37c1a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:30.130 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.175 253542 DEBUG nova.policy [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.247 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8191f951-44bc-4371-957a-f2e7d37c1a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.327 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.419 253542 DEBUG nova.objects.instance [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.431 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.432 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Ensure instance console log exists: /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.432 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.433 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.433 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.861 253542 DEBUG nova.network.neutron [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.882 253542 INFO nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 1.44 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.929 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.930 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:30 np0005534516 nova_compute[253538]: 2025-11-25 08:32:30.990 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: d8bd16e1-3695-474d-be04-7fdf44bee803 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.019 253542 DEBUG nova.network.neutron [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.040 253542 INFO nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.046 253542 DEBUG oslo_concurrency.processutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.127 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 164 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 03:32:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191338927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.464 253542 DEBUG oslo_concurrency.processutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.470 253542 DEBUG nova.compute.provider_tree [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.487 253542 DEBUG nova.scheduler.client.report [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.525 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.528 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.551 253542 INFO nova.scheduler.client.report [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.616 253542 DEBUG oslo_concurrency.processutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.653 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.668 253542 DEBUG nova.compute.manager [req-984860ab-0f99-486f-9723-593d5c2b1902 req-db01d342-8991-4fb5-b89a-e6138cf066ba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-deleted-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.790 253542 DEBUG nova.compute.manager [req-5492f7e0-491a-419c-8e56-94e9731d534e req-8cf02a02-97bd-4978-b33f-b61639dd8dd9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-deleted-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.807 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.808 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.819 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.863 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: d8bd16e1-3695-474d-be04-7fdf44bee803 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.873 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.874 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.874 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.876 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:31 np0005534516 nova_compute[253538]: 2025-11-25 08:32:31.982 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086060253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.039 253542 DEBUG oslo_concurrency.processutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.046 253542 DEBUG nova.compute.provider_tree [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.063 253542 DEBUG nova.scheduler.client.report [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.083 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.086 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.094 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.095 253542 INFO nova.compute.claims [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.110 253542 INFO nova.scheduler.client.report [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 0201b222-1aa1-4d57-901c-e3c79170b567#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.300 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.351 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.670 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.683 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.683 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance network_info: |[{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.685 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start _get_guest_xml network_info=[{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.690 253542 WARNING nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.694 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.695 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.703 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115266596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.789 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.796 253542 DEBUG nova.compute.provider_tree [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.809 253542 DEBUG nova.scheduler.client.report [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.830 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.831 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.871 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.871 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.892 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:32:32 np0005534516 nova_compute[253538]: 2025-11-25 08:32:32.912 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.006 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.007 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.008 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating image(s)#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.029 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.052 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.075 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.078 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.111 253542 DEBUG nova.policy [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ad88cb0e4cf4d0b8e4cbec835318015', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.151 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.151 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.152 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.152 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316736445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.174 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.176 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 146 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 260 op/s
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.203 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.222 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.227 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.437 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.518 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.614 253542 DEBUG nova.objects.instance [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ensure instance console log exists: /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.627 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.627 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.672 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Successfully created port: 79f4b8f5-d582-44c5-b8e0-a82ad73193de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440657913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.752 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.754 253542 DEBUG nova.virt.libvirt.vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.755 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.756 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.758 253542 DEBUG nova.objects.instance [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.773 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <name>instance-00000034</name>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:32</nova:creationTime>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="serial">8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="uuid">8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:1a:58:19"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <target dev="tapd8bd16e1-36"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log" append="off"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:33 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:33 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:33 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:33 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.773 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Preparing to wait for external event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.774 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.774 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.775 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.776 253542 DEBUG nova.virt.libvirt.vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.776 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.777 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.778 253542 DEBUG os_vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.780 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.780 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8bd16e1-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.786 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8bd16e1-36, col_values=(('external_ids', {'iface-id': 'd8bd16e1-3695-474d-be04-7fdf44bee803', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:58:19', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:33 np0005534516 NetworkManager[48915]: <info>  [1764059553.7897] manager: (tapd8bd16e1-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.796 253542 INFO os_vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36')#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.850 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.851 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.851 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.852 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Using config drive#033[00m
Nov 25 03:32:33 np0005534516 nova_compute[253538]: 2025-11-25 08:32:33.873 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.312 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating config drive at /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.320 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod093_q4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.466 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod093_q4" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.500 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.506 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.550 253542 DEBUG nova.compute.manager [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.550 253542 DEBUG nova.compute.manager [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.911 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Successfully updated port: 79f4b8f5-d582-44c5-b8e0-a82ad73193de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.924 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.925 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquired lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:34 np0005534516 nova_compute[253538]: 2025-11-25 08:32:34.925 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.039 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.040 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deleting local config drive /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.065 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:35 np0005534516 kernel: tapd8bd16e1-36: entered promiscuous mode
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.1053] manager: (tapd8bd16e1-36): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Nov 25 03:32:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:35Z|00428|binding|INFO|Claiming lport d8bd16e1-3695-474d-be04-7fdf44bee803 for this chassis.
Nov 25 03:32:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:35Z|00429|binding|INFO|d8bd16e1-3695-474d-be04-7fdf44bee803: Claiming fa:16:3e:1a:58:19 10.100.0.11
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.116 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:58:19 10.100.0.11'], port_security=['fa:16:3e:1a:58:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab6e7e4a-351f-4b59-b94e-a7f51f236dd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d8bd16e1-3695-474d-be04-7fdf44bee803) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.118 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d8bd16e1-3695-474d-be04-7fdf44bee803 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.119 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:32:35 np0005534516 systemd-udevd[309840]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:35Z|00430|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 ovn-installed in OVS
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5600931-7e42-47f9-9986-e410dfa61388]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:35Z|00431|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 up in Southbound
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.136 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bf3cbfa-71 in ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.138 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bf3cbfa-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.139 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f954a33-8d40-45b5-928a-2627855d853c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[413a598e-b4d0-4f4a-a162-01d0faa83691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.1498] device (tapd8bd16e1-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.1509] device (tapd8bd16e1-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.151 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7797f486-cf2a-4249-82e3-ece8558596ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 systemd-machined[215790]: New machine qemu-58-instance-00000034.
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.165 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9686e6d5-2143-442c-9c2d-4f9203580ab3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 systemd[1]: Started Virtual Machine qemu-58-instance-00000034.
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.191 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6654ea-077f-4523-ad15-5ee226489236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 191 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.9 MiB/s wr, 285 op/s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1324e03a-ad18-45af-8753-aded4a2b30d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.1999] manager: (tap9bf3cbfa-70): new Veth device (/org/freedesktop/NetworkManager/Devices/200)
Nov 25 03:32:35 np0005534516 systemd-udevd[309844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.231 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d998355b-c55b-47d4-9555-d210028fdf58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.235 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8560003f-340c-41f1-88b5-777a8aa67046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.2572] device (tap9bf3cbfa-70): carrier: link connected
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.263 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[84e09d2a-14ab-40a7-a08b-258e34bcae4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3077cfd-d194-45e0-bc75-fde10de1a346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309873, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e3ac3d-2beb-4ac4-a144-a487af70190d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:8fc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490985, 'tstamp': 490985}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309874, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa6b383-b02b-43d2-9ccb-204bff828868]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309875, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.340 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab1c470-9375-4902-a013-691bfc1121c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e5114-840c-4f16-9c54-0d7cb623e896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.398 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.398 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.399 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:35 np0005534516 kernel: tap9bf3cbfa-70: entered promiscuous mode
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:35 np0005534516 NetworkManager[48915]: <info>  [1764059555.4013] manager: (tap9bf3cbfa-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.403 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:35Z|00432|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.405 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5e691-10c5-43e5-8ad3-fbd33fb8ed21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.407 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 03:32:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.408 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'env', 'PROCESS_TAG=haproxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG nova.compute.manager [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.469 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.469 253542 DEBUG nova.compute.manager [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Processing event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.486 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.681 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.704 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Releasing lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.705 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance network_info: |[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.709 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start _get_guest_xml network_info=[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.716 253542 WARNING nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.722 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.723 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.725 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.729 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.729 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.732 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:32:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:35 np0005534516 podman[309905]: 2025-11-25 08:32:35.789925332 +0000 UTC m=+0.051739122 container create 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:32:35 np0005534516 systemd[1]: Started libpod-conmon-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope.
Nov 25 03:32:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.852 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.854 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:32:35 np0005534516 podman[309905]: 2025-11-25 08:32:35.760653625 +0000 UTC m=+0.022467415 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc5cfaa8d3aa0802256d8920c39f377bf45d88d3a7eb5b3dfababcae814e2ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:35 np0005534516 podman[309905]: 2025-11-25 08:32:35.868901474 +0000 UTC m=+0.130715284 container init 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.870 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:32:35 np0005534516 podman[309905]: 2025-11-25 08:32:35.873908469 +0000 UTC m=+0.135722259 container start 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 03:32:35 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : New worker (309999) forked
Nov 25 03:32:35 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : Loading success.
Nov 25 03:32:35 np0005534516 podman[309959]: 2025-11-25 08:32:35.915725873 +0000 UTC m=+0.071771400 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.934 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.933833, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.934 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Started (Lifecycle Event)
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.937 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.940 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.943 253542 INFO nova.virt.libvirt.driver [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance spawned successfully.
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.943 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.954 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.959 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.971 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.972 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.972 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.977 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.9340353, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:35 np0005534516 nova_compute[253538]: 2025-11-25 08:32:35.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.008 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.9397054, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.026 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.030 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.044 253542 INFO nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 6.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.045 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.052 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.111 253542 INFO nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 7.58 seconds to build instance.#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.129 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682330192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.218 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.244 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.248 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.349 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.350 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.365 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.408 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.409 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.409 253542 INFO nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Rebooting instance#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG nova.network.neutron [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.436 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.437 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.443 253542 DEBUG nova.virt.hardware [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.443 253542 INFO nova.compute.claims [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.598 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.650 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.671 253542 DEBUG nova.compute.manager [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-changed-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG nova.compute.manager [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Refreshing instance network info cache due to event network-changed-79f4b8f5-d582-44c5-b8e0-a82ad73193de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Refreshing network info cache for port 79f4b8f5-d582-44c5-b8e0-a82ad73193de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2443545900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.714 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.715 253542 DEBUG nova.virt.libvirt.vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:32Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.715 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.716 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.717 253542 DEBUG nova.objects.instance [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.729 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <uuid>c0942fc7-74d4-4fc8-9574-4fea9179e71b</uuid>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <name>instance-00000035</name>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1031241876</nova:name>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:35</nova:creationTime>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <nova:port uuid="79f4b8f5-d582-44c5-b8e0-a82ad73193de">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="serial">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="uuid">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:24:9d:e4"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <target dev="tap79f4b8f5-d5"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log" append="off"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:36 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:36 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:36 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:36 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Preparing to wait for external event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.731 253542 DEBUG nova.virt.libvirt.vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:32Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.731 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.732 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.732 253542 DEBUG os_vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.733 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79f4b8f5-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79f4b8f5-d5, col_values=(('external_ids', {'iface-id': '79f4b8f5-d582-44c5-b8e0-a82ad73193de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:9d:e4', 'vm-uuid': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:36 np0005534516 NetworkManager[48915]: <info>  [1764059556.7409] manager: (tap79f4b8f5-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.748 253542 INFO os_vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:24:9d:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.800 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Using config drive#033[00m
Nov 25 03:32:36 np0005534516 nova_compute[253538]: 2025-11-25 08:32:36.822 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695310889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.062 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.067 253542 DEBUG nova.compute.provider_tree [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.078 253542 DEBUG nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.107 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.107 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.154 253542 DEBUG nova.compute.claims [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Aborting claim: <nova.compute.claims.Claim object at 0x7f843dedb2b0> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.155 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.155 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 204 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 242 op/s
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.220 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating config drive at /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.224 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1fay3ve execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.349 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.379 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1fay3ve" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.398 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.401 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.539 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.539 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting local config drive /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 DEBUG nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 WARNING nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.5862] manager: (tap79f4b8f5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Nov 25 03:32:37 np0005534516 kernel: tap79f4b8f5-d5: entered promiscuous mode
Nov 25 03:32:37 np0005534516 systemd-udevd[309856]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00433|binding|INFO|Claiming lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de for this chassis.
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00434|binding|INFO|79f4b8f5-d582-44c5-b8e0-a82ad73193de: Claiming fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.5981] device (tap79f4b8f5-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.5991] device (tap79f4b8f5-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.600 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.602 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.603 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901#033[00m
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00435|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de ovn-installed in OVS
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00436|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de up in Southbound
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c3b49a-e3cd-431f-9c04-b2d4a3f9235b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.615 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.617 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.617 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0932c291-ab27-470e-a062-bcc00619195b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 systemd-machined[215790]: New machine qemu-59-instance-00000035.
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a883a77-f871-4c0b-9f0a-54526b92232a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 systemd[1]: Started Virtual Machine qemu-59-instance-00000035.
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.632 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[47e37947-fee9-4235-9e06-c17fa97c2fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f192ca24-2a31-4e9b-8c49-5b99eba6d666]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.695 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[155559a1-c58b-4515-8c06-41f35668810f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.7045] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.703 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e84aac-30e8-429e-9cd7-dba53af22e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.732 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ecebcb31-4c5b-470d-af21-987af9eefff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.735 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f505df12-8f7e-4d13-a556-30f2f0821841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.750 253542 DEBUG nova.network.neutron [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.7590] device (tapeb25945d-60): carrier: link connected
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.765 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.767 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.770 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[16d90b5d-1324-426a-bbee-8a2ef4aaee7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.786 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f086ade-43d5-4165-8740-e209a9c902c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491235, 'reachable_time': 42549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310192, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106536887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.801 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[841a84dd-02c4-462a-a5f9-2b7d69e88194]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491235, 'tstamp': 491235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310193, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.812 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.817 253542 DEBUG nova.compute.provider_tree [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad5835f-cd83-4012-97cc-f48144622ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491235, 'reachable_time': 42549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310196, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.843 253542 DEBUG nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.851 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e369befb-7de9-460f-adc6-15fbfa7337d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.876 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.877 253542 DEBUG nova.compute.utils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Conflict updating instance 8ee60656-c206-4a84-9774-e8f852386097. Expected: {'task_state': [None]}. Actual: {'task_state': 'deleting'} notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.877 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance disappeared during build. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.894 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updated VIF entry in instance network info cache for port 79f4b8f5-d582-44c5-b8e0-a82ad73193de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.894 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.905 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.914 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b61f710-8395-4cd2-bb3e-ca1976819289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.915 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.916 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.916 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.9199] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.923 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00437|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 03:32:37 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:32:37 np0005534516 NetworkManager[48915]: <info>  [1764059557.9466] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00438|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00439|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:32:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:37Z|00440|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.954 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.955 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d12011e4-6e2b-4def-8093-218f391e4088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.959 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.960 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:32:37 np0005534516 nova_compute[253538]: 2025-11-25 08:32:37.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.975 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:38 np0005534516 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:32:38 np0005534516 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 13.642s CPU time.
Nov 25 03:32:38 np0005534516 systemd-machined[215790]: Machine qemu-55-instance-00000032 terminated.
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.029 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:38 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:32:38 np0005534516 NetworkManager[48915]: <info>  [1764059558.0999] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00441|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00442|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.109 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00443|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00444|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.136 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.137 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00445|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=1)
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00446|if_status|INFO|Dropped 1 log messages in last 26 seconds (most recently, 26 seconds ago) due to excessive rate
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00447|if_status|INFO|Not setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down as sb is readonly
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00448|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00449|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:32:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:38Z|00450|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.157 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.158 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.158 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.158 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.159 253542 DEBUG os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.161 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.167 253542 INFO os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.174 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.178 253542 WARNING nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.182 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.183 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.208 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:38 np0005534516 podman[310239]: 2025-11-25 08:32:38.358133045 +0000 UTC m=+0.071375810 container create 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:32:38 np0005534516 systemd[1]: Started libpod-conmon-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope.
Nov 25 03:32:38 np0005534516 podman[310239]: 2025-11-25 08:32:38.318033607 +0000 UTC m=+0.031276482 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eef64a7b2bc3a9c676eca2c757f9f8de2a6b5948fd7a6d5db2aa9098a5e44dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:38 np0005534516 podman[310239]: 2025-11-25 08:32:38.4509887 +0000 UTC m=+0.164231485 container init 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:32:38 np0005534516 podman[310239]: 2025-11-25 08:32:38.457846484 +0000 UTC m=+0.171089239 container start 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : New worker (310321) forked
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : Loading success.
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.504 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.506 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.507 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e81cbb3f-7f8d-4e67-a050-97ed634e92f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.507 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059558.5129726, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.514 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Started (Lifecycle Event)#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.535 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.539 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059558.5169563, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.539 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.555 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.558 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : Exiting Master process...
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : Exiting Master process...
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [ALERT]    (307488) : Current worker (307490) exited with code 143 (Terminated)
Nov 25 03:32:38 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : All workers exited. Exiting... (0)
Nov 25 03:32:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:38 np0005534516 systemd[1]: libpod-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope: Deactivated successfully.
Nov 25 03:32:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265011623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:38 np0005534516 podman[310347]: 2025-11-25 08:32:38.669866793 +0000 UTC m=+0.052932194 container died 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.682 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6642dd5f19a45eda9399f2698ab38ce8fab03c28deb9c9cf4ff5295c9db6ea21-merged.mount: Deactivated successfully.
Nov 25 03:32:38 np0005534516 podman[310347]: 2025-11-25 08:32:38.711765739 +0000 UTC m=+0.094831120 container cleanup 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:32:38 np0005534516 systemd[1]: libpod-conmon-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope: Deactivated successfully.
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.732 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.742 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.773 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.774 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.774 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.775 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:38 np0005534516 podman[310392]: 2025-11-25 08:32:38.786149647 +0000 UTC m=+0.056636523 container remove 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.791 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4c1b75-8beb-4a43-af7b-e27776c227ce]: (4, ('Tue Nov 25 08:32:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826)\n5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826\nTue Nov 25 08:32:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826)\n5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7756bab9-35fe-4310-9e65-037cd15c2b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.793 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f52a94-ebd7-4b63-b770-7ee124b5df15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.834 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53512463-746a-4839-b002-4d091645dee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ada17b5-e1ed-4735-905a-a5de99092166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.851 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94912074-f5ee-45d5-9be4-74a794710526]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488871, 'reachable_time': 44099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310411, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.858 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.858 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[81ffaf5a-696c-4a0a-baca-267728221815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.859 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.861 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.861 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3539fbe-3283-4f16-84ec-07f5961ef18a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.862 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.863 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95587370-91a1-4572-8263-d66d17efa289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.929 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.946 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:38 np0005534516 nova_compute[253538]: 2025-11-25 08:32:38.959 253542 INFO nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.031 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.034 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.034 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Processing event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.037 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.038 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.038 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.047 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.047 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.049 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.055 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.058 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance spawned successfully.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.058 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059559.0629315, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.063 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.086 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.086 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.087 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.087 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.088 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.088 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.101 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.105 253542 INFO nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 8ee60656-c206-4a84-9774-e8f852386097#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.105 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 2.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.108 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 2.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.110 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.141 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'metadata' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687400569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.179 253542 INFO nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 6.17 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.179 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.183 253542 INFO nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Terminating instance#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.196 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.197 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.197 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.198 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 240 op/s
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.200 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.200 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.201 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.202 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.237 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <name>instance-00000032</name>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:38</nova:creationTime>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:07:cd:40"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <target dev="tap15af3dd8-97"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:39 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:39 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:39 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:39 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.238 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.238 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.239 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.239 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.241 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.243 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.245 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.246 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.247 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.2485] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.258 253542 INFO os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.278 253542 INFO nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 7.42 seconds to build instance.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.302 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:39 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:32:39 np0005534516 systemd-udevd[310215]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:39Z|00451|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:32:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:39Z|00452|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.336 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.3381] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.342 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.342 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.343 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.3482] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.3492] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:39Z|00453|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:32:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:39Z|00454|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41195a4f-8f31-4845-a086-bb9ac3862ff7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.361 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.361 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[02bdab70-2fbf-4a56-9b73-07102ea663e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab270bd-b172-4364-9f3c-00e91e09d19b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.376 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[72518e40-79b6-4525-895d-b2a54c626726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 systemd-machined[215790]: New machine qemu-60-instance-00000032.
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e27e76-2934-4898-aad0-33dd07d60847]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 systemd[1]: Started Virtual Machine qemu-60-instance-00000032.
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.434 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8dbe7d-e516-4c07-8ea1-eca90745de22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.4485] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e8fe79-1d71-4ed4-b334-d03dae6af503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.489 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5d923724-c372-45be-ba8b-59c68f95487c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.492 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9db1faec-0525-44ac-a566-10f01e255c64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.5192] device (tap908154e6-30): carrier: link connected
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.528 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf7b325-771c-43fb-b368-cbbcf66b4855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8219559e-3103-4949-8e95-eaa7c3b37197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491411, 'reachable_time': 34687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310478, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.562 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30813ce8-88a7-4e5b-afbf-557e505a6d57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491411, 'tstamp': 491411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310479, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f88bdd6-ffb0-4225-a5ea-d99e32c68f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491411, 'reachable_time': 34687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310480, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.615 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e2a636-d9d2-4806-80e9-8784ddfd8af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.661 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG nova.compute.manager [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG nova.compute.manager [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.666 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.676 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.677 253542 DEBUG nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7868153-0ed5-4564-b23f-e7e349ec5721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 NetworkManager[48915]: <info>  [1764059559.6822] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 25 03:32:39 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.685 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:39Z|00455|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.702 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.704 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d46d274-e852-40f6-ae89-36b995e162f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.704 253542 DEBUG nova.virt.libvirt.driver [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.705 253542 INFO nova.virt.libvirt.driver [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance destroyed successfully.#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.705 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:32:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.706 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.705 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.715 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.749 253542 INFO nova.virt.libvirt.driver [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deletion of /var/lib/nova/instances/8ee60656-c206-4a84-9774-e8f852386097_del complete#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.807 253542 INFO nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.808 253542 DEBUG oslo.service.loopingcall [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.808 253542 DEBUG nova.compute.manager [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.809 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.913 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.923 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:39 np0005534516 nova_compute[253538]: 2025-11-25 08:32:39.939 253542 INFO nova.compute.manager [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.13 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.162 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:40 np0005534516 podman[310544]: 2025-11-25 08:32:40.122879954 +0000 UTC m=+0.034767536 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.364 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.365 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059560.364427, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.367 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.370 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.370 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.407 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.437 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059560.3678687, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.437 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.448 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.460 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.464 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:40 np0005534516 nova_compute[253538]: 2025-11-25 08:32:40.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:40 np0005534516 podman[310544]: 2025-11-25 08:32:40.517692735 +0000 UTC m=+0.429580277 container create 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:32:40 np0005534516 systemd[1]: Started libpod-conmon-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope.
Nov 25 03:32:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/facc729861bf2043006a179130b6eb00c45dc0c9002c2a929f8d9cf2dc4ea94f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:40 np0005534516 podman[310544]: 2025-11-25 08:32:40.773796258 +0000 UTC m=+0.685683850 container init 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:32:40 np0005534516 podman[310544]: 2025-11-25 08:32:40.784085114 +0000 UTC m=+0.695972666 container start 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 03:32:40 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : New worker (310588) forked
Nov 25 03:32:40 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : Loading success.
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.018 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.018 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.037 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.4 MiB/s wr, 258 op/s
Nov 25 03:32:41 np0005534516 podman[310597]: 2025-11-25 08:32:41.81084351 +0000 UTC m=+0.068032150 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:41 np0005534516 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.109 253542 INFO nova.compute.manager [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Get console output#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.115 253542 INFO oslo.privsep.daemon [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp3ey7djee/privsep.sock']#033[00m
Nov 25 03:32:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.7 MiB/s wr, 261 op/s
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.324 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.325 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.340 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.407 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.408 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.414 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.415 253542 INFO nova.compute.claims [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.557 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.690 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059548.6869287, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.691 253542 INFO nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.713 253542 DEBUG nova.compute.manager [None req-7510b18c-133e-43cd-9017-c4e3c8b5b1be - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.920 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059548.9199588, 0201b222-1aa1-4d57-901c-e3c79170b567 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.921 253542 INFO nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:32:43 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.939 253542 DEBUG nova.compute.manager [None req-41fddf24-ff8c-4fbd-8212-38d2b4875bef - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318530023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.027 253542 INFO oslo.privsep.daemon [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.902 310639 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.906 310639 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.908 310639 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:43.908 310639 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310639#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.038 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.044 253542 DEBUG nova.compute.provider_tree [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.056 253542 DEBUG nova.scheduler.client.report [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.076 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.077 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.118 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.119 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.119 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.140 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.158 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.247 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.249 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.249 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating image(s)#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.279 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.317 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.355 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.361 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.441 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.442 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.443 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.443 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.462 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.465 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.537 253542 INFO nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Rebuilding instance#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.762 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.800 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.833 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.839 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:44 np0005534516 podman[310737]: 2025-11-25 08:32:44.848651924 +0000 UTC m=+0.106987757 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.932 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_requests' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.939 253542 DEBUG nova.objects.instance [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.942 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.953 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.953 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Ensure instance console log exists: /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.954 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.954 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.955 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.956 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.968 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.979 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:32:44 np0005534516 nova_compute[253538]: 2025-11-25 08:32:44.982 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:32:45 np0005534516 nova_compute[253538]: 2025-11-25 08:32:45.144 253542 DEBUG nova.policy [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:32:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 218 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 304 op/s
Nov 25 03:32:45 np0005534516 nova_compute[253538]: 2025-11-25 08:32:45.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:46 np0005534516 nova_compute[253538]: 2025-11-25 08:32:46.666 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Successfully created port: 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 229 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 263 op/s
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.455 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Successfully updated port: 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.466 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.467 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.467 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.605 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG nova.compute.manager [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-changed-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG nova.compute.manager [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Refreshing instance network info cache due to event network-changed-40df73d0-e48b-4bb9-96eb-c236dd2ca614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:32:47 np0005534516 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.249 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.268 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance network_info: |[{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Refreshing network info cache for port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.272 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start _get_guest_xml network_info=[{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.275 253542 WARNING nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.279 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.279 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.283 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.284 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.284 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.288 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.290 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.531 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.535 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.535 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.541 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.543 253542 DEBUG nova.objects.instance [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.568 253542 DEBUG nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:32:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914164472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.764 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.784 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:48 np0005534516 nova_compute[253538]: 2025-11-25 08:32:48.791 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 244 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Nov 25 03:32:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:32:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852451613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.273 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.277 253542 DEBUG nova.virt.libvirt.vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJS
ON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.277 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.278 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.279 253542 DEBUG nova.objects.instance [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.292 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <uuid>6fd0259d-3f5c-487b-906c-db0ac2b00830</uuid>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <name>instance-00000037</name>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersTestJSON-server-1160841163</nova:name>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:32:48</nova:creationTime>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <nova:port uuid="40df73d0-e48b-4bb9-96eb-c236dd2ca614">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="serial">6fd0259d-3f5c-487b-906c-db0ac2b00830</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="uuid">6fd0259d-3f5c-487b-906c-db0ac2b00830</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6fd0259d-3f5c-487b-906c-db0ac2b00830_disk">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f5:15:48"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <target dev="tap40df73d0-e4"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/console.log" append="off"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:32:49 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:32:49 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:32:49 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:32:49 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Preparing to wait for external event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.299 253542 DEBUG nova.virt.libvirt.vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteSer
versTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.300 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.300 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.301 253542 DEBUG os_vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.302 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.302 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.305 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40df73d0-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.306 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap40df73d0-e4, col_values=(('external_ids', {'iface-id': '40df73d0-e48b-4bb9-96eb-c236dd2ca614', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:15:48', 'vm-uuid': '6fd0259d-3f5c-487b-906c-db0ac2b00830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:49 np0005534516 NetworkManager[48915]: <info>  [1764059569.3083] manager: (tap40df73d0-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.309 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.312 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.314 253542 INFO os_vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4')#033[00m
Nov 25 03:32:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:49Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:58:19 10.100.0.11
Nov 25 03:32:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:49Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:58:19 10.100.0.11
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.399 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.400 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.401 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:f5:15:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.401 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Using config drive#033[00m
Nov 25 03:32:49 np0005534516 nova_compute[253538]: 2025-11-25 08:32:49.427 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.201 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating config drive at /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.207 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyr1zuj8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.360 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyr1zuj8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.382 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.391 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:50 np0005534516 nova_compute[253538]: 2025-11-25 08:32:50.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.091 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updated VIF entry in instance network info cache for port 40df73d0-e48b-4bb9-96eb-c236dd2ca614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.092 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.106 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.9 MiB/s wr, 261 op/s
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.845 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.846 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deleting local config drive /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config because it was imported into RBD.#033[00m
Nov 25 03:32:51 np0005534516 kernel: tap40df73d0-e4: entered promiscuous mode
Nov 25 03:32:51 np0005534516 NetworkManager[48915]: <info>  [1764059571.9098] manager: (tap40df73d0-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Nov 25 03:32:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:51Z|00456|binding|INFO|Claiming lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 for this chassis.
Nov 25 03:32:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:51Z|00457|binding|INFO|40df73d0-e48b-4bb9-96eb-c236dd2ca614: Claiming fa:16:3e:f5:15:48 10.100.0.10
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:51Z|00458|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 ovn-installed in OVS
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:51 np0005534516 nova_compute[253538]: 2025-11-25 08:32:51.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:51Z|00459|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 up in Southbound
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.942 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:48 10.100.0.10'], port_security=['fa:16:3e:f5:15:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fd0259d-3f5c-487b-906c-db0ac2b00830', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40df73d0-e48b-4bb9-96eb-c236dd2ca614) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.944 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.946 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721#033[00m
Nov 25 03:32:51 np0005534516 systemd-udevd[310974]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:51 np0005534516 systemd-machined[215790]: New machine qemu-61-instance-00000037.
Nov 25 03:32:51 np0005534516 NetworkManager[48915]: <info>  [1764059571.9643] device (tap40df73d0-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:51 np0005534516 systemd[1]: Started Virtual Machine qemu-61-instance-00000037.
Nov 25 03:32:51 np0005534516 NetworkManager[48915]: <info>  [1764059571.9655] device (tap40df73d0-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.966 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[927faca9-86af-4367-ab33-62ca61bee446]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.968 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.970 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41bf9a3f-6e49-4535-a483-c803cf743c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.971 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17c7d82c-9e11-44f4-9440-266f20230f04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.981 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[23784088-96b1-4f88-baa2-42f41cf195b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[126af961-b72c-41b0-b61d-4af8538d67b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.026 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50f81391-eeda-48ad-a318-38e35e05e719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57bfc8cd-325b-4f99-bb1e-17ac8eee47c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 NetworkManager[48915]: <info>  [1764059572.0327] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.069 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f91090bc-9e50-414a-8642-23dbf99f389b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.072 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69d1625d-6ebb-4165-bd3c-7313351ce5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 NetworkManager[48915]: <info>  [1764059572.0952] device (tapa66e51b8-e0): carrier: link connected
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.100 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[25d712eb-ef22-4261-9cbf-478e4addd614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[afd58ac0-651f-4f09-a82f-fec876a01291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492669, 'reachable_time': 37257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311008, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebf1bf0-b0f8-486a-bfc9-623733ec9bc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492669, 'tstamp': 492669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311009, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7863c43d-1052-4d1a-ac8e-3f88d8515b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492669, 'reachable_time': 37257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311010, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.203 253542 DEBUG nova.compute.manager [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.203 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG nova.compute.manager [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Processing event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7da12626-1af6-4ded-8aba-ce15303ebce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.273 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa44f851-9af1-45b1-8204-f7d34e763e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:52 np0005534516 NetworkManager[48915]: <info>  [1764059572.2793] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.280 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.282 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.284 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.288 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:32:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:52Z|00460|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3db9f9-0ee7-40c8-9126-30c9c20476a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.292 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:32:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.293 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.584 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.584037, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.585 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Started (Lifecycle Event)#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.587 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.591 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.593 253542 INFO nova.virt.libvirt.driver [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance spawned successfully.#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.593 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.624 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.631 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.635 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.636 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.638 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:32:52 np0005534516 podman[311083]: 2025-11-25 08:32:52.691565548 +0000 UTC m=+0.029041101 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.5843024, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:52 np0005534516 podman[311083]: 2025-11-25 08:32:52.805206933 +0000 UTC m=+0.142682466 container create 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.820 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.825 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.590889, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.842 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.847 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.860 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:32:52 np0005534516 systemd[1]: Started libpod-conmon-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope.
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.896 253542 INFO nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 8.65 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.897 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:32:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391b8476a98a77d0632e121cc7458acf3109db7ed9a9f0b088b923765e8de82c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:32:52 np0005534516 podman[311083]: 2025-11-25 08:32:52.931653731 +0000 UTC m=+0.269129294 container init 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:52 np0005534516 podman[311083]: 2025-11-25 08:32:52.937641542 +0000 UTC m=+0.275117075 container start 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.954 253542 INFO nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 9.56 seconds to build instance.#033[00m
Nov 25 03:32:52 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : New worker (311104) forked
Nov 25 03:32:52 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : Loading success.
Nov 25 03:32:52 np0005534516 nova_compute[253538]: 2025-11-25 08:32:52.986 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 299 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 256 op/s
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:32:53
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control']
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:32:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:53Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 03:32:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:53Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 03:32:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:53Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:32:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.328 253542 INFO nova.compute.manager [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Pausing#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.329 253542 DEBUG nova.objects.instance [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.358 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059574.3579557, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.359 253542 DEBUG nova.compute.manager [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.387 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.390 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.415 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.637 253542 DEBUG nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.637 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:54 np0005534516 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 WARNING nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received unexpected event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with vm_state paused and task_state None.#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.023 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:32:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 318 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.285 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.288 253542 INFO nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Terminating instance#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.289 253542 DEBUG nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:32:55 np0005534516 kernel: tap40df73d0-e4 (unregistering): left promiscuous mode
Nov 25 03:32:55 np0005534516 NetworkManager[48915]: <info>  [1764059575.3301] device (tap40df73d0-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:55Z|00461|binding|INFO|Releasing lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 from this chassis (sb_readonly=0)
Nov 25 03:32:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:55Z|00462|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 down in Southbound
Nov 25 03:32:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:55Z|00463|binding|INFO|Removing iface tap40df73d0-e4 ovn-installed in OVS
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.351 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:48 10.100.0.10'], port_security=['fa:16:3e:f5:15:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fd0259d-3f5c-487b-906c-db0ac2b00830', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40df73d0-e48b-4bb9-96eb-c236dd2ca614) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.352 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.353 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a4db8-6b6e-4775-92f5-4d79557b1af5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.355 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 25 03:32:55 np0005534516 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Consumed 2.110s CPU time.
Nov 25 03:32:55 np0005534516 systemd-machined[215790]: Machine qemu-61-instance-00000037 terminated.
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.489 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.491 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.492 253542 DEBUG nova.objects.instance [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.516 253542 DEBUG nova.objects.instance [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.528 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.579 253542 INFO nova.virt.libvirt.driver [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance destroyed successfully.#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.584 253542 DEBUG nova.objects.instance [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.597 253542 DEBUG nova.virt.libvirt.vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.598 253542 DEBUG nova.network.os_vif_util [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.599 253542 DEBUG nova.network.os_vif_util [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.600 253542 DEBUG os_vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.603 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40df73d0-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.609 253542 INFO os_vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4')#033[00m
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : Exiting Master process...
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : Exiting Master process...
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [ALERT]    (311102) : Current worker (311104) exited with code 143 (Terminated)
Nov 25 03:32:55 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : All workers exited. Exiting... (0)
Nov 25 03:32:55 np0005534516 systemd[1]: libpod-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope: Deactivated successfully.
Nov 25 03:32:55 np0005534516 podman[311138]: 2025-11-25 08:32:55.626839086 +0000 UTC m=+0.107650483 container died 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:32:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-391b8476a98a77d0632e121cc7458acf3109db7ed9a9f0b088b923765e8de82c-merged.mount: Deactivated successfully.
Nov 25 03:32:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:55 np0005534516 podman[311138]: 2025-11-25 08:32:55.679781429 +0000 UTC m=+0.160592816 container cleanup 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:55 np0005534516 systemd[1]: libpod-conmon-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope: Deactivated successfully.
Nov 25 03:32:55 np0005534516 podman[311194]: 2025-11-25 08:32:55.761577407 +0000 UTC m=+0.053888289 container remove 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2169a2-5931-4d19-ae13-30826ebfd4d2]: (4, ('Tue Nov 25 08:32:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4)\n11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4\nTue Nov 25 08:32:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4)\n11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf3990a-5a20-48b3-8d24-a9f939f6b20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.775 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:55 np0005534516 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 nova_compute[253538]: 2025-11-25 08:32:55.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9ecf93-c459-465a-b4b0-c92d01c6828c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.806 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c4bd9-6caa-4cd2-99d1-577acd93523d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00d35e44-b179-4103-9774-6d7cf7abe3bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.825 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e79f9ebe-7c71-4a2c-9bf2-c6a9dbbe7f0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492661, 'reachable_time': 25224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311209, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.828 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.828 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ceefa5-cff5-4beb-a0bc-619457430f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:55 np0005534516 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.068 253542 DEBUG nova.policy [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.114 253542 INFO nova.virt.libvirt.driver [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deleting instance files /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830_del#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.115 253542 INFO nova.virt.libvirt.driver [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deletion of /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830_del complete#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 INFO nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 DEBUG oslo.service.loopingcall [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 DEBUG nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.175 253542 DEBUG nova.network.neutron [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.988 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: 223207be-35e0-4b8b-bf78-113792059910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:32:56 np0005534516 nova_compute[253538]: 2025-11-25 08:32:56.992 253542 DEBUG nova.network.neutron [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.006 253542 INFO nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 0.83 seconds to deallocate network for instance.#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 WARNING nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received unexpected event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with vm_state paused and task_state deleting.#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.039 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.040 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.151 253542 DEBUG oslo_concurrency.processutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 310 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 237 op/s
Nov 25 03:32:57 np0005534516 kernel: tap79f4b8f5-d5 (unregistering): left promiscuous mode
Nov 25 03:32:57 np0005534516 NetworkManager[48915]: <info>  [1764059577.5404] device (tap79f4b8f5-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:32:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:57Z|00464|binding|INFO|Releasing lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de from this chassis (sb_readonly=0)
Nov 25 03:32:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:57Z|00465|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de down in Southbound
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:57Z|00466|binding|INFO|Removing iface tap79f4b8f5-d5 ovn-installed in OVS
Nov 25 03:32:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.556 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis#033[00m
Nov 25 03:32:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.557 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:32:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f17a0973-a10a-45d5-ad72-efc039dc7895]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.558 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:57 np0005534516 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 03:32:57 np0005534516 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Consumed 13.937s CPU time.
Nov 25 03:32:57 np0005534516 systemd-machined[215790]: Machine qemu-59-instance-00000035 terminated.
Nov 25 03:32:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:32:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859965703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.687 253542 DEBUG oslo_concurrency.processutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.694 253542 DEBUG nova.compute.provider_tree [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.707 253542 DEBUG nova.scheduler.client.report [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : haproxy version is 2.8.14-c23fe91
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : path to executable is /usr/sbin/haproxy
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : Exiting Master process...
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : Exiting Master process...
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [ALERT]    (310318) : Current worker (310321) exited with code 143 (Terminated)
Nov 25 03:32:57 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : All workers exited. Exiting... (0)
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.730 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:57 np0005534516 systemd[1]: libpod-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope: Deactivated successfully.
Nov 25 03:32:57 np0005534516 podman[311253]: 2025-11-25 08:32:57.732440277 +0000 UTC m=+0.073203559 container died 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.752 253542 INFO nova.scheduler.client.report [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 6fd0259d-3f5c-487b-906c-db0ac2b00830#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.763 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 223207be-35e0-4b8b-bf78-113792059910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.810 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.830 253542 DEBUG nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.830 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:32:57 np0005534516 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 WARNING nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:32:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1eef64a7b2bc3a9c676eca2c757f9f8de2a6b5948fd7a6d5db2aa9098a5e44dd-merged.mount: Deactivated successfully.
Nov 25 03:32:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb-userdata-shm.mount: Deactivated successfully.
Nov 25 03:32:57 np0005534516 podman[311253]: 2025-11-25 08:32:57.98263689 +0000 UTC m=+0.323400162 container cleanup 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:32:57 np0005534516 systemd[1]: libpod-conmon-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope: Deactivated successfully.
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.039 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.045 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.048 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.049 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.049 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.050 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.050 253542 DEBUG os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.052 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79f4b8f5-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.054 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.056 253542 INFO os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')#033[00m
Nov 25 03:32:58 np0005534516 podman[311295]: 2025-11-25 08:32:58.069007302 +0000 UTC m=+0.066777556 container remove 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.073 253542 WARNING nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.082 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f187da-03c7-4b5a-9b0e-7697847525b2]: (4, ('Tue Nov 25 08:32:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb)\n54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb\nTue Nov 25 08:32:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb)\n54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.084 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f74a7425-4c3e-4310-ae51-b49256fe18b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.085 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:58 np0005534516 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[274a246c-3c09-4f7e-a2a5-b4c92d182515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.145 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e61c9af1-b33e-4a2a-bd1e-9c10229ab6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.147 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c97c1296-c7c2-4f26-85b7-ab812716e16b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47231b-a59b-4430-9c42-8c7343fb6477]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491227, 'reachable_time': 42325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311328, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.166 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:32:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.166 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2505cf60-b893-4357-8906-889cf9d270d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:58 np0005534516 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 03:32:58 np0005534516 nova_compute[253538]: 2025-11-25 08:32:58.644 253542 DEBUG nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.112 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting instance files /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.113 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deletion of /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del complete#033[00m
Nov 25 03:32:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 301 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.8 MiB/s wr, 273 op/s
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.254 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.254 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating image(s)#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.283 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.310 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.339 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.343 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.431 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.433 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.434 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.434 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.466 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.469 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.697 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.722 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.732 253542 DEBUG nova.virt.libvirt.vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.733 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.735 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.736 253542 DEBUG os_vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.743 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap223207be-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.743 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap223207be-35, col_values=(('external_ids', {'iface-id': '223207be-35e0-4b8b-bf78-113792059910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:eb:0c', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 NetworkManager[48915]: <info>  [1764059579.7468] manager: (tap223207be-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.755 253542 INFO os_vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.756 253542 DEBUG nova.virt.libvirt.vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.756 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.757 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.760 253542 DEBUG nova.virt.libvirt.guest [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <target dev="tap223207be-35"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:32:59 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:32:59 np0005534516 kernel: tap223207be-35: entered promiscuous mode
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:59Z|00467|binding|INFO|Claiming lport 223207be-35e0-4b8b-bf78-113792059910 for this chassis.
Nov 25 03:32:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:59Z|00468|binding|INFO|223207be-35e0-4b8b-bf78-113792059910: Claiming fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 03:32:59 np0005534516 NetworkManager[48915]: <info>  [1764059579.7748] manager: (tap223207be-35): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.779 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:eb:0c 10.100.0.12'], port_security=['fa:16:3e:3a:eb:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=223207be-35e0-4b8b-bf78-113792059910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.781 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 223207be-35e0-4b8b-bf78-113792059910 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.782 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:32:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:59Z|00469|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 ovn-installed in OVS
Nov 25 03:32:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:32:59Z|00470|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 up in Southbound
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.803 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3acb253-ac54-49ff-b7fb-7e23ea344d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 systemd-udevd[311431]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:32:59 np0005534516 NetworkManager[48915]: <info>  [1764059579.8230] device (tap223207be-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:32:59 np0005534516 NetworkManager[48915]: <info>  [1764059579.8237] device (tap223207be-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.833 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc1b561-35ec-48f7-a769-b63c3dde36f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.837 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae981f40-ba9f-4245-aac6-539521b0772e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.852 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.870 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa452e9-9f44-4fa6-bdbd-3b0d38682187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80135125-7fcf-4105-b33c-c186399884a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311454, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36002729-5665-41b7-8b43-b432e87ec16e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311464, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311464, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:32:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.914 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.916 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.926 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.951 253542 DEBUG nova.virt.libvirt.guest [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:32:59</nova:creationTime>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:32:59 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 03:32:59 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:32:59 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:32:59 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:32:59 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:32:59 np0005534516 nova_compute[253538]: 2025-11-25 08:32:59.971 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.017 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.018 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ensure instance console log exists: /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.018 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.019 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.019 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.021 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start _get_guest_xml network_info=[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.023 253542 WARNING nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.028 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.028 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.030 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.050 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.211 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-deleted-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.212 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.213 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-223207be-35e0-4b8b-bf78-113792059910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.213 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.214 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.214 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 223207be-35e0-4b8b-bf78-113792059910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046479535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.495 253542 DEBUG nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.497 253542 WARNING nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.511 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.529 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.532 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.568 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.569 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.585 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.650 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.651 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.657 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.658 253542 INFO nova.compute.claims [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:33:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.803 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902323065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.970 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.971 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.971 253542 DEBUG nova.objects.instance [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.980 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.981 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='temp
est-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:59Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.982 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.983 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <uuid>c0942fc7-74d4-4fc8-9574-4fea9179e71b</uuid>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <name>instance-00000035</name>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1031241876</nova:name>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:00</nova:creationTime>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <nova:port uuid="79f4b8f5-d582-44c5-b8e0-a82ad73193de">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="serial">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="uuid">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:24:9d:e4"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <target dev="tap79f4b8f5-d5"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log" append="off"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:00 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:00 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:00 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:00 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Preparing to wait for external event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.986 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.986 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:59Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.988 253542 DEBUG os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.992 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:00Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 03:33:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:00Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79f4b8f5-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:00 np0005534516 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79f4b8f5-d5, col_values=(('external_ids', {'iface-id': '79f4b8f5-d582-44c5-b8e0-a82ad73193de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:9d:e4', 'vm-uuid': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.0012] manager: (tap79f4b8f5-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.005 253542 INFO os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.049 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.049 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.050 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:24:9d:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.050 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Using config drive#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.076 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.0911] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.093 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00471|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00472|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00473|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.117 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'keypairs' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.119 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.121 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.122 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41d00465-6b31-4305-b5ae-f2f02c62c79a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.124 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:33:01 np0005534516 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000032.scope: Consumed 13.452s CPU time.
Nov 25 03:33:01 np0005534516 systemd-machined[215790]: Machine qemu-60-instance-00000032 terminated.
Nov 25 03:33:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 250 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 290 op/s
Nov 25 03:33:01 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:01 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:01 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [ALERT]    (310586) : Current worker (310588) exited with code 143 (Terminated)
Nov 25 03:33:01 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [WARNING]  (310586) : All workers exited. Exiting... (0)
Nov 25 03:33:01 np0005534516 systemd[1]: libpod-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope: Deactivated successfully.
Nov 25 03:33:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802527729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:01 np0005534516 podman[311637]: 2025-11-25 08:33:01.271400699 +0000 UTC m=+0.052672407 container died 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.280 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.287 253542 DEBUG nova.compute.provider_tree [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.304 253542 DEBUG nova.scheduler.client.report [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-facc729861bf2043006a179130b6eb00c45dc0c9002c2a929f8d9cf2dc4ea94f-merged.mount: Deactivated successfully.
Nov 25 03:33:01 np0005534516 podman[311637]: 2025-11-25 08:33:01.321597928 +0000 UTC m=+0.102869636 container cleanup 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:33:01 np0005534516 systemd[1]: libpod-conmon-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope: Deactivated successfully.
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.336 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.337 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.353 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 223207be-35e0-4b8b-bf78-113792059910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.353 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.369 253542 DEBUG nova.objects.instance [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.383 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.385 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.392 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.393 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:33:01 np0005534516 podman[311673]: 2025-11-25 08:33:01.401658769 +0000 UTC m=+0.056405537 container remove 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.407 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[750bf71c-6ec1-47fd-a5ba-040ba4a0cbe2]: (4, ('Tue Nov 25 08:33:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339)\n2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339\nTue Nov 25 08:33:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339)\n2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.409 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb73e65-fca0-4156-b1c0-16d753a6d868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.410 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.424 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c24e52f-321f-4f06-b4e3-d1e3a2773995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.477 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ec2c40-875e-469d-a501-4251582bfc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.479 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bae69034-a8ee-4aba-a26a-6819a836ab90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.483 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating config drive at /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.487 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoqy3o6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.496 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40332456-d02c-4550-98e3-43156a02045b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491402, 'reachable_time': 44539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311704, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.499 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.499 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b86daaf4-cde2-48b5-a129-22a536002620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.561 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.563 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.564 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating image(s)#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.581 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.600 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.619 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.622 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.649 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoqy3o6j" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.652 253542 DEBUG nova.policy [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.655 253542 DEBUG nova.policy [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.675 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.678 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.706 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.708 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.709 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.709 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.733 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.736 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 528fb917-0169-441d-b32d-652963344aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.768 253542 INFO nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.777 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.777 253542 DEBUG nova.objects.instance [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.796 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.830 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.831 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting local config drive /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config because it was imported into RBD.#033[00m
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.839 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:01 np0005534516 kernel: tap79f4b8f5-d5: entered promiscuous mode
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.8883] manager: (tap79f4b8f5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Nov 25 03:33:01 np0005534516 systemd-udevd[311435]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00474|binding|INFO|Claiming lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de for this chassis.
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00475|binding|INFO|79f4b8f5-d582-44c5-b8e0-a82ad73193de: Claiming fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.9024] device (tap79f4b8f5-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.9030] device (tap79f4b8f5-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.906 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.908 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.910 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90870a19-cb75-46b4-abbc-0788645b771f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.924 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.925 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee757ba-fbf4-4d86-9d1a-3bc374b8732a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bada7701-730e-4c28-bb9a-f70043cd0e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00476|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de ovn-installed in OVS
Nov 25 03:33:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:01Z|00477|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de up in Southbound
Nov 25 03:33:01 np0005534516 systemd-machined[215790]: New machine qemu-62-instance-00000035.
Nov 25 03:33:01 np0005534516 nova_compute[253538]: 2025-11-25 08:33:01.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.937 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[44c9be7c-7b84-4eb3-b33b-589c4e919639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 systemd[1]: Started Virtual Machine qemu-62-instance-00000035.
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7abd1f-8771-4482-b31c-f38f3429b105]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.986 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c583e1cf-42b9-432f-ba41-b69f17493037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.991 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e336f6ed-13a9-459c-a08a-60f83a280f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:01 np0005534516 NetworkManager[48915]: <info>  [1764059581.9919] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.018 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a27ea9-64ac-47aa-8456-ee5bed7b1ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.021 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[13f65a1c-9d5e-4143-be54-f2b1b30cb0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 NetworkManager[48915]: <info>  [1764059582.0464] device (tapeb25945d-60): carrier: link connected
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.053 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[be9ebdff-7062-4194-9051-ae296edf70b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15e5150d-2d04-4bef-8153-6a5e9bf3f6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493664, 'reachable_time': 43980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311888, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[929a789c-e42b-4312-bff5-aa069b202a31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493664, 'tstamp': 493664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311889, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.094 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 528fb917-0169-441d-b32d-652963344aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.105 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6448ab28-29b2-4fd0-8735-50d54daebdfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493664, 'reachable_time': 43980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311890, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78450a83-9d78-438a-aa3f-58d068d6b465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.166 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 528fb917-0169-441d-b32d-652963344aea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.199 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5572aacb-0efc-4074-b228-93310b3a4d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:02 np0005534516 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 03:33:02 np0005534516 NetworkManager[48915]: <info>  [1764059582.2063] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.208 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:02Z|00478|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.227 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4774418e-a190-48a3-a2a7-e09225ae955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.229 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.231 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.492 253542 DEBUG nova.objects.instance [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.509 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.509 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Ensure instance console log exists: /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.510 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.510 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.511 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:02 np0005534516 podman[311997]: 2025-11-25 08:33:02.604065826 +0000 UTC m=+0.046332587 container create 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:33:02 np0005534516 systemd[1]: Started libpod-conmon-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope.
Nov 25 03:33:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27abebbb45e35e11fa193c86fc45de186c88f818d07d7200fb6e69019986f4b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:02 np0005534516 podman[311997]: 2025-11-25 08:33:02.579363232 +0000 UTC m=+0.021630023 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:02 np0005534516 podman[311997]: 2025-11-25 08:33:02.679623257 +0000 UTC m=+0.121890058 container init 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:33:02 np0005534516 podman[311997]: 2025-11-25 08:33:02.685557976 +0000 UTC m=+0.127824747 container start 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.716 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for c0942fc7-74d4-4fc8-9574-4fea9179e71b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059582.716126, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.717 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:02 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : New worker (312062) forked
Nov 25 03:33:02 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : Loading success.
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.733 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.737 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059582.7199795, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.737 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.746 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.752 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.754 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.771 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Successfully created port: 56d077f0-8f69-40d8-bd5e-267a70c4c319 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:33:02 np0005534516 nova_compute[253538]: 2025-11-25 08:33:02.773 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.172 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state None.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state None.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Processing event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.179 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.182 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059583.1820388, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.182 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.184 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.187 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance spawned successfully.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.187 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 262 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.225 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.226 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.227 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.228 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.229 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.230 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.238 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.243 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.312 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.345 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.401 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.402 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.402 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.482 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG nova.network.neutron [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:03 np0005534516 nova_compute[253538]: 2025-11-25 08:33:03.589 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002005269380412841 of space, bias 1.0, pg target 0.6015808141238523 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:33:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.528 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Successfully updated port: 56d077f0-8f69-40d8-bd5e-267a70c4c319 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.573 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.585 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.586 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:04 np0005534516 nova_compute[253538]: 2025-11-25 08:33:04.586 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.117 253542 WARNING nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.118 253542 WARNING nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.120 253542 DEBUG nova.network.neutron [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.132 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.166 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.190 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.192 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.200 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.207 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.208 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.208 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.209 253542 DEBUG os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.211 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 295 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 317 op/s
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.216 253542 INFO os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.223 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.228 253542 WARNING nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.233 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.233 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.251 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731586341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.748 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:05 np0005534516 nova_compute[253538]: 2025-11-25 08:33:05.787 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914449147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.267 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.268 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.269 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.270 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.271 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.282 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <name>instance-00000032</name>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:05</nova:creationTime>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:07:cd:40"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <target dev="tap15af3dd8-97"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:06 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:06 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:06 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:06 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.284 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.284 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.285 253542 DEBUG os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.289 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.290 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.2919] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.296 253542 INFO os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:33:06 np0005534516 podman[312136]: 2025-11-25 08:33:06.390743616 +0000 UTC m=+0.064085353 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 03:33:06 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:33:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:06Z|00479|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:33:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:06Z|00480|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.4337] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.443 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.444 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.446 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.457 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59bab187-ccc6-4a3e-9583-5617a1226f31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.458 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:06Z|00481|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:33:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:06Z|00482|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.461 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1a2fb8-b54a-4f44-8e79-61232d34f108]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f89f333-97a0-43fd-8903-7a59f515ea0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 systemd-udevd[312165]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.474 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[476a251c-4606-48bd-9baf-731e9238b488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.4798] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.4807] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:06 np0005534516 systemd-machined[215790]: New machine qemu-63-instance-00000032.
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.492 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34a17383-b820-4070-85d5-bfb715b58eb7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.504 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:06 np0005534516 systemd[1]: Started Virtual Machine qemu-63-instance-00000032.
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.505 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance network_info: |[{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.508 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start _get_guest_xml network_info=[{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.514 253542 WARNING nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.523 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff101a3-3ff5-48db-9bb8-8d02b163d5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.5348] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Nov 25 03:33:06 np0005534516 systemd-udevd[312170]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.537 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.534 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[755797df-0694-49f2-96c8-2e98a1f677e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.537 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.561 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.565 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8cfcef-b521-404b-af21-076bc343a839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.567 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.570 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b288d1-ac1b-440b-9f4a-5499bb69efd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.609 253542 DEBUG nova.compute.manager [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG nova.compute.manager [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing instance network info cache due to event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.6193] device (tap908154e6-30): carrier: link connected
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.629 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f1c347-31fc-49ee-8710-ace6d6086484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61a5f850-3c36-4470-b2e3-b0f905cb9543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494121, 'reachable_time': 43234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312201, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.664 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04d64fd6-985d-4490-9c43-941557f84270]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494121, 'tstamp': 494121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312202, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.691 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac18e20-cceb-4331-8e6b-c273793ad7da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494121, 'reachable_time': 43234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312203, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be132dc6-cfb9-44df-a78a-5e5f4e6f3c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.793 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab4ad17-85eb-4e08-8589-68f375179650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.794 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.795 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.795 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.799 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:06 np0005534516 NetworkManager[48915]: <info>  [1764059586.7999] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 25 03:33:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:06Z|00483|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:33:06 np0005534516 nova_compute[253538]: 2025-11-25 08:33:06.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.817 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec027dd-6f5d-48a1-9faf-ebb0084bc049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.818 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.819 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535910666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.089 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.109 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.113 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 245 op/s
Nov 25 03:33:07 np0005534516 podman[312282]: 2025-11-25 08:33:07.21954215 +0000 UTC m=+0.054257678 container create bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:33:07 np0005534516 systemd[1]: Started libpod-conmon-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope.
Nov 25 03:33:07 np0005534516 podman[312282]: 2025-11-25 08:33:07.188779534 +0000 UTC m=+0.023495082 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bbb652004c5b4ad3c3896e006b5f7cd4464afd4fecf01ab4a4f22a51d10b5b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:07 np0005534516 podman[312282]: 2025-11-25 08:33:07.304571186 +0000 UTC m=+0.139286744 container init bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 03:33:07 np0005534516 podman[312282]: 2025-11-25 08:33:07.310558157 +0000 UTC m=+0.145273685 container start bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.321 253542 DEBUG nova.compute.manager [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.322 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.323 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059587.3185778, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.323 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : New worker (312356) forked
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : Loading success.
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.339 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.340 253542 DEBUG nova.compute.manager [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.352 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.355 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.383 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059587.3200617, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.407 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.416 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.416 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.419 253542 INFO nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Terminating instance#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.422 253542 DEBUG nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:33:07 np0005534516 kernel: tap79f4b8f5-d5 (unregistering): left promiscuous mode
Nov 25 03:33:07 np0005534516 NetworkManager[48915]: <info>  [1764059587.4611] device (tap79f4b8f5-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:07Z|00484|binding|INFO|Releasing lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de from this chassis (sb_readonly=0)
Nov 25 03:33:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:07Z|00485|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de down in Southbound
Nov 25 03:33:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:07Z|00486|binding|INFO|Removing iface tap79f4b8f5-d5 ovn-installed in OVS
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.474 253542 DEBUG nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.474 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 WARNING nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.480 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.481 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.483 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[071f2bc7-3671-44b8-9e83-d20c5bccda6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.484 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 03:33:07 np0005534516 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Consumed 5.008s CPU time.
Nov 25 03:33:07 np0005534516 systemd-machined[215790]: Machine qemu-62-instance-00000035 terminated.
Nov 25 03:33:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152049899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.564 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.565 253542 DEBUG nova.virt.libvirt.vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:01Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.565 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.566 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.567 253542 DEBUG nova.objects.instance [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.579 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <uuid>528fb917-0169-441d-b32d-652963344aea</uuid>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <name>instance-00000038</name>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersTestJSON-server-688174086</nova:name>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:06</nova:creationTime>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <nova:port uuid="56d077f0-8f69-40d8-bd5e-267a70c4c319">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="serial">528fb917-0169-441d-b32d-652963344aea</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="uuid">528fb917-0169-441d-b32d-652963344aea</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/528fb917-0169-441d-b32d-652963344aea_disk">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/528fb917-0169-441d-b32d-652963344aea_disk.config">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:7a:ce:b3"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <target dev="tap56d077f0-8f"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/console.log" append="off"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:07 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:07 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:07 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:07 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.579 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Preparing to wait for external event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.virt.libvirt.vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServer
sTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:01Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG os_vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.583 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.591 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56d077f0-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.592 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56d077f0-8f, col_values=(('external_ids', {'iface-id': '56d077f0-8f69-40d8-bd5e-267a70c4c319', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:ce:b3', 'vm-uuid': '528fb917-0169-441d-b32d-652963344aea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 NetworkManager[48915]: <info>  [1764059587.5942] manager: (tap56d077f0-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.603 253542 INFO os_vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f')#033[00m
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : Exiting Master process...
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : Exiting Master process...
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [ALERT]    (312060) : Current worker (312062) exited with code 143 (Terminated)
Nov 25 03:33:07 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : All workers exited. Exiting... (0)
Nov 25 03:33:07 np0005534516 systemd[1]: libpod-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope: Deactivated successfully.
Nov 25 03:33:07 np0005534516 podman[312386]: 2025-11-25 08:33:07.624810043 +0000 UTC m=+0.051769633 container died 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:33:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-27abebbb45e35e11fa193c86fc45de186c88f818d07d7200fb6e69019986f4b1-merged.mount: Deactivated successfully.
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.660 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.660 253542 DEBUG nova.objects.instance [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:07 np0005534516 podman[312386]: 2025-11-25 08:33:07.670568632 +0000 UTC m=+0.097528222 container cleanup 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.679 253542 DEBUG nova.virt.libvirt.vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:03Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.679 253542 DEBUG nova.network.os_vif_util [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.680 253542 DEBUG nova.network.os_vif_util [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.680 253542 DEBUG os_vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.682 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79f4b8f5-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:07 np0005534516 systemd[1]: libpod-conmon-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope: Deactivated successfully.
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:7a:ce:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.689 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Using config drive#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.709 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.716 253542 INFO os_vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')#033[00m
Nov 25 03:33:07 np0005534516 podman[312431]: 2025-11-25 08:33:07.741213041 +0000 UTC m=+0.045922855 container remove 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[430409f3-b680-4f40-9128-55ea122cb251]: (4, ('Tue Nov 25 08:33:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5)\n6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5\nTue Nov 25 08:33:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5)\n6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.750 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36871710-1f80-414e-9759-78bccca031a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.751 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:07 np0005534516 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 nova_compute[253538]: 2025-11-25 08:33:07.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c75e4-e71d-4db2-a15e-e7672a423608]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.793 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[48538c91-86cb-498f-9f61-732f63d542dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4607aaa-13a2-49bd-b0c6-8253bce5b2e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af550e99-5dcd-4e15-9805-4ce4af4bf1ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493658, 'reachable_time': 32183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312486, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:07 np0005534516 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.817 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.817 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[afdc76ae-dea6-4579-aba6-9f8ad19972fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.074 253542 INFO nova.virt.libvirt.driver [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting instance files /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.075 253542 INFO nova.virt.libvirt.driver [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deletion of /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del complete#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.127 253542 INFO nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG oslo.service.loopingcall [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG nova.network.neutron [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.328 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating config drive at /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.333 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_m1mhmf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.473 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_m1mhmf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.495 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.498 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config 528fb917-0169-441d-b32d-652963344aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.654 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updated VIF entry in instance network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.655 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:08 np0005534516 nova_compute[253538]: 2025-11-25 08:33:08.671 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.163 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.194 253542 DEBUG nova.network.neutron [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.208 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.212 253542 INFO nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 25 03:33:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 286 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.218 253542 DEBUG nova.virt.libvirt.vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.218 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.219 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.219 253542 DEBUG os_vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap582f57a6-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap582f57a6-32, col_values=(('external_ids', {'iface-id': '582f57a6-32d3-44a0-ab47-d147a0bb0f43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:1a:cb', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.2255] manager: (tap582f57a6-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.236 253542 INFO os_vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32')#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.virt.libvirt.vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.240 253542 DEBUG nova.virt.libvirt.guest [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:7a:1a:cb"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <target dev="tap582f57a6-32"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:33:09 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:33:09 np0005534516 systemd-udevd[312194]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.2539] manager: (tap582f57a6-32): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 25 03:33:09 np0005534516 kernel: tap582f57a6-32: entered promiscuous mode
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.257 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.258 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00487|binding|INFO|Claiming lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 for this chassis.
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00488|binding|INFO|582f57a6-32d3-44a0-ab47-d147a0bb0f43: Claiming fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.2644] device (tap582f57a6-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.2651] device (tap582f57a6-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.267 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:1a:cb 10.100.0.14'], port_security=['fa:16:3e:7a:1a:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=582f57a6-32d3-44a0-ab47-d147a0bb0f43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.268 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.271 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39bb33ff-1d8d-45ff-b10e-46a4d39b7c92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00489|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 ovn-installed in OVS
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00490|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 up in Southbound
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.330 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e323eb-8e10-434e-863b-fd720757ad44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.334 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f6085b5a-c409-4583-8654-f7ae7f138669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.366 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:7a:1a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[71a0b079-48ab-410b-8868-c7f5f710b450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e90eaf5c-b71c-4f8a-98da-333ab6b5d9b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312546, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.391 253542 DEBUG oslo_concurrency.processutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18cd78f8-aa5f-46c7-927d-808bf7b053da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312547, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312547, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.412 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.441 253542 DEBUG nova.virt.libvirt.guest [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:09</nova:creationTime>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:09 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:09 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:09 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:09 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.464 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.852 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config 528fb917-0169-441d-b32d-652963344aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.853 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deleting local config drive /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config because it was imported into RBD.#033[00m
Nov 25 03:33:09 np0005534516 kernel: tap56d077f0-8f: entered promiscuous mode
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.9094] manager: (tap56d077f0-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00491|binding|INFO|Claiming lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 for this chassis.
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00492|binding|INFO|56d077f0-8f69-40d8-bd5e-267a70c4c319: Claiming fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.936 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ce:b3 10.100.0.4'], port_security=['fa:16:3e:7a:ce:b3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '528fb917-0169-441d-b32d-652963344aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=56d077f0-8f69-40d8-bd5e-267a70c4c319) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.938 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 56d077f0-8f69-40d8-bd5e-267a70c4c319 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.940 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721#033[00m
Nov 25 03:33:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440581534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b693a906-19fd-4782-8b0c-996934a2cbcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.953 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.954 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67b84fbe-f78e-43cc-9215-7b01b1767aa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5a557-b52e-4e06-974a-98584bb59387]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 systemd-udevd[312584]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00493|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 ovn-installed in OVS
Nov 25 03:33:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:09Z|00494|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 up in Southbound
Nov 25 03:33:09 np0005534516 systemd-machined[215790]: New machine qemu-64-instance-00000038.
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.9674] device (tap56d077f0-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:09 np0005534516 NetworkManager[48915]: <info>  [1764059589.9679] device (tap56d077f0-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:09 np0005534516 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.974 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.978 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cb78c8b3-f9c5-45ec-8931-b71c0ddaa3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.992 253542 DEBUG oslo_concurrency.processutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:09 np0005534516 nova_compute[253538]: 2025-11-25 08:33:09.999 253542 DEBUG nova.compute.provider_tree [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c50e584-cb64-4f71-95bf-e08fc849302e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.016 253542 DEBUG nova.scheduler.client.report [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.034 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.034 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2122eb85-23ec-4acb-aeba-03398380ad1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 NetworkManager[48915]: <info>  [1764059590.0404] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.039 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[536fa575-44f3-434b-acbd-8d681bb84171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 systemd-udevd[312587]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.059 253542 INFO nova.scheduler.client.report [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance c0942fc7-74d4-4fc8-9574-4fea9179e71b#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.073 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4e1942-2ad3-416d-896f-86dc65e08042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.077 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e1aa45fd-fb76-4149-ac59-cd8765eef48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 NetworkManager[48915]: <info>  [1764059590.1057] device (tapa66e51b8-e0): carrier: link connected
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.111 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b551b5d-0e6e-490b-a59c-5755662e24db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.129 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b751b99f-f3f0-44ba-9ae6-48cd326659f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494470, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312616, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.131 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aec1373c-3908-45c4-b067-0767328a601f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494470, 'tstamp': 494470}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312617, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.160 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe55981-7740-4fe2-ba94-9b23ce138098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494470, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312618, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.190 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e89c3-9b04-444d-811f-a748354cff53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ede2a001-807a-4600-82ed-c4a19a576912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:10 np0005534516 NetworkManager[48915]: <info>  [1764059590.2549] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 25 03:33:10 np0005534516 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:10Z|00495|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.291 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10e33483-c543-4ee0-90ce-eff858670aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.293 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.293 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.318 253542 DEBUG nova.compute.manager [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.318 253542 DEBUG nova.compute.manager [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-582f57a6-32d3-44a0-ab47-d147a0bb0f43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.528 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059575.5264928, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.528 253542 INFO nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.549 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059590.5494423, 528fb917-0169-441d-b32d-652963344aea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.550 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.556 253542 DEBUG nova.compute.manager [None req-5aa08cfc-0450-425d-8888-466a68a195af - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.577 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.581 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059590.550347, 528fb917-0169-441d-b32d-652963344aea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.581 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.597 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.602 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.617 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.621 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.621 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.626 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:10 np0005534516 nova_compute[253538]: 2025-11-25 08:33:10.626 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:33:10 np0005534516 podman[312691]: 2025-11-25 08:33:10.660926301 +0000 UTC m=+0.043295885 container create 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:33:10 np0005534516 systemd[1]: Started libpod-conmon-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope.
Nov 25 03:33:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3b17704a9232b7fd9a3c6391403db11755eeec93cf71409d4cffb3b11bf81a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:10 np0005534516 podman[312691]: 2025-11-25 08:33:10.728774635 +0000 UTC m=+0.111144239 container init 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:33:10 np0005534516 podman[312691]: 2025-11-25 08:33:10.733867812 +0000 UTC m=+0.116237396 container start 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:33:10 np0005534516 podman[312691]: 2025-11-25 08:33:10.640479511 +0000 UTC m=+0.022849095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:10 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : New worker (312711) forked
Nov 25 03:33:10 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : Loading success.
Nov 25 03:33:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:11Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 03:33:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:11Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 03:33:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 262 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.441 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.442 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.454 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.514 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 582f57a6-32d3-44a0-ab47-d147a0bb0f43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.514 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.529 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.530 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.538 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.539 253542 INFO nova.compute.claims [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.544 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:33:11 np0005534516 nova_compute[253538]: 2025-11-25 08:33:11.670 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.122 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.122 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.123 253542 DEBUG nova.objects.instance [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897483266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.209 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.214 253542 DEBUG nova.compute.provider_tree [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.224 253542 DEBUG nova.scheduler.client.report [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.244 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.244 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.287 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.288 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.300 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.315 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.389 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.390 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.391 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating image(s)#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.414 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.441 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.461 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.467 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.505 253542 DEBUG nova.policy [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ad88cb0e4cf4d0b8e4cbec835318015', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.540 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.542 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.543 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.544 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.571 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.576 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.614 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.660 253542 DEBUG nova.objects.instance [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.674 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:33:12 np0005534516 podman[312836]: 2025-11-25 08:33:12.826915234 +0000 UTC m=+0.066547149 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.841 253542 DEBUG nova.compute.manager [req-495902d4-5669-491e-95d7-6593ff750674 req-f00d369a-5082-4574-b94e-10a4e5a1992b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-deleted-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.859 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.905 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.943 253542 DEBUG nova.policy [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.981 253542 DEBUG nova.objects.instance [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.995 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.996 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Ensure instance console log exists: /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.996 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.997 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:12 np0005534516 nova_compute[253538]: 2025-11-25 08:33:12.997 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 248 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.240 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Successfully created port: dc1f5923-d984-4e49-bb97-bc1a77ade410 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.445 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.446 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.446 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Processing event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.455 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059593.4555595, 528fb917-0169-441d-b32d-652963344aea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.456 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.457 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.460 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance spawned successfully.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.460 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.475 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.482 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.482 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.483 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.483 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.484 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.484 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.508 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.548 253542 INFO nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Took 11.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.549 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.583 253542 INFO nova.compute.manager [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Pausing#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.584 253542 DEBUG nova.objects.instance [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.634 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059593.63393, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.635 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.643 253542 DEBUG nova.compute.manager [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.645 253542 INFO nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Took 13.02 seconds to build instance.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.674 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.677 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.679 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.706 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.739 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 9541f2fd-4ec3-47ef-a6a9-66e0052c303f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.753 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.754 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.754 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.899 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.900 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:33:13 np0005534516 nova_compute[253538]: 2025-11-25 08:33:13.900 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.098 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Successfully updated port: dc1f5923-d984-4e49-bb97-bc1a77ade410 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.111 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.112 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquired lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.112 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:14 np0005534516 nova_compute[253538]: 2025-11-25 08:33:14.246 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:33:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 287 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 234 op/s
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.266 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.267 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.267 253542 INFO nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shelving#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.283 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.421 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.440 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Releasing lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.440 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance network_info: |[{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.444 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start _get_guest_xml network_info=[{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.447 253542 WARNING nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.452 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.453 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.457 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.457 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.459 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.459 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.461 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.464 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.752 253542 INFO nova.compute.manager [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Unpausing#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.754 253542 DEBUG nova.objects.instance [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:15 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.778 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059595.7784793, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.779 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.781 253542 DEBUG nova.virt.libvirt.guest [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.782 253542 DEBUG nova.compute.manager [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG nova.compute.manager [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-changed-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG nova.compute.manager [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Refreshing instance network info cache due to event network-changed-dc1f5923-d984-4e49-bb97-bc1a77ade410. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.792 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.792 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Refreshing network info cache for port dc1f5923-d984-4e49-bb97-bc1a77ade410 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.805 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.811 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:15 np0005534516 podman[312949]: 2025-11-25 08:33:15.827297672 +0000 UTC m=+0.074035331 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.840 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 25 03:33:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127209050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.945 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.966 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:15 np0005534516 nova_compute[253538]: 2025-11-25 08:33:15.969 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117530641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.435 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.438 253542 DEBUG nova.virt.libvirt.vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:12Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.438 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.439 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.440 253542 DEBUG nova.objects.instance [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.457 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <uuid>fb888d2a-db54-44dc-8ec7-db417fa3cff6</uuid>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <name>instance-00000039</name>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-2088861040</nova:name>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:15</nova:creationTime>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <nova:port uuid="dc1f5923-d984-4e49-bb97-bc1a77ade410">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="serial">fb888d2a-db54-44dc-8ec7-db417fa3cff6</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="uuid">fb888d2a-db54-44dc-8ec7-db417fa3cff6</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:7d:1c:27"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <target dev="tapdc1f5923-d9"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/console.log" append="off"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:16 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:16 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:16 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:16 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.463 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Preparing to wait for external event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.464 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.464 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.465 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.465 253542 DEBUG nova.virt.libvirt.vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:12Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.466 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.467 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.468 253542 DEBUG os_vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.470 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.472 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc1f5923-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.473 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc1f5923-d9, col_values=(('external_ids', {'iface-id': 'dc1f5923-d984-4e49-bb97-bc1a77ade410', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:1c:27', 'vm-uuid': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:16 np0005534516 NetworkManager[48915]: <info>  [1764059596.4756] manager: (tapdc1f5923-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.485 253542 INFO os_vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9')#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.549 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.549 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.550 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:7d:1c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.550 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Using config drive#033[00m
Nov 25 03:33:16 np0005534516 nova_compute[253538]: 2025-11-25 08:33:16.569 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Nov 25 03:33:17 np0005534516 nova_compute[253538]: 2025-11-25 08:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.124 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating config drive at /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.132 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w_aq4k4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.197 253542 DEBUG nova.compute.manager [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.198 253542 DEBUG nova.compute.manager [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-9541f2fd-4ec3-47ef-a6a9-66e0052c303f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.199 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.288 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w_aq4k4" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.316 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.320 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.550 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:33:19 np0005534516 nova_compute[253538]: 2025-11-25 08:33:19.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123912326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.031 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.112 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.113 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.120 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.120 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.133 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.134 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.211 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.211 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deleting local config drive /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config because it was imported into RBD.#033[00m
Nov 25 03:33:20 np0005534516 kernel: tapdc1f5923-d9: entered promiscuous mode
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.2616] manager: (tapdc1f5923-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:20Z|00496|binding|INFO|Claiming lport dc1f5923-d984-4e49-bb97-bc1a77ade410 for this chassis.
Nov 25 03:33:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:20Z|00497|binding|INFO|dc1f5923-d984-4e49-bb97-bc1a77ade410: Claiming fa:16:3e:7d:1c:27 10.100.0.6
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:1c:27 10.100.0.6'], port_security=['fa:16:3e:7d:1c:27 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=dc1f5923-d984-4e49-bb97-bc1a77ade410) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Port dc1f5923-d984-4e49-bb97-bc1a77ade410 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.277 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901#033[00m
Nov 25 03:33:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:20Z|00498|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 ovn-installed in OVS
Nov 25 03:33:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:20Z|00499|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 up in Southbound
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca74b7f6-9115-420e-990f-9c5bf595ac86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.289 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.295 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[605b3903-63eb-4559-ae55-6a912ff58a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf67c914-1267-4bc4-8510-757c668b5b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 systemd-machined[215790]: New machine qemu-65-instance-00000039.
Nov 25 03:33:20 np0005534516 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.309 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[18896c09-9a11-42e1-bd67-00fa6905df4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6fb02e-cd2a-4445-953b-735a64287850]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 systemd-udevd[313116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.3536] device (tapdc1f5923-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.3546] device (tapdc1f5923-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.357 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c94712b-9997-4fa7-b28b-7e5bca84c862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6f3402-79c0-4d84-9e0c-e5aa1a2068fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.3658] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.401 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c64831-2ad3-42f0-930d-1a8066a7bdf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.405 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83c99464-6d09-44b0-b7b2-73ceddcb8a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.4295] device (tapeb25945d-60): carrier: link connected
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.435 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61209d49-69b9-4c30-a361-b972731baf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59386e88-8819-496c-aff1-d6bf0e585a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495502, 'reachable_time': 43707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313146, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.470 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.471 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3612MB free_disk=59.85540008544922GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.471 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.472 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.479 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f7aa2-e791-4c1e-858b-724903bad402]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495502, 'tstamp': 495502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313147, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c067e86-de80-4c1c-af4f-80380dd77b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495502, 'reachable_time': 43707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313148, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.535 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2829fd98-b34e-47b3-ab85-ea26dcf491db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8191f951-44bc-4371-957a-f2e7d37c1a32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 528fb917-0169-441d-b32d-652963344aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fb888d2a-db54-44dc-8ec7-db417fa3cff6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5911e3-f49e-4341-8827-7711d515885d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 03:33:20 np0005534516 NetworkManager[48915]: <info>  [1764059600.6035] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.606 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:20Z|00500|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.622 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[552464f0-0d7e-4249-b3d2-9590c94be22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.624 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.625 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.673 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.753 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updated VIF entry in instance network info cache for port dc1f5923-d984-4e49-bb97-bc1a77ade410. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.754 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:20 np0005534516 nova_compute[253538]: 2025-11-25 08:33:20.781 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 03:33:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007355957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.265 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.271 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.284 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.346 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:21 np0005534516 podman[313238]: 2025-11-25 08:33:21.36302011 +0000 UTC m=+0.032602017 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.516 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059601.5157537, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.517 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.535 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.538 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059601.5161715, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.539 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.554 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.557 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:21 np0005534516 podman[313238]: 2025-11-25 08:33:21.563119658 +0000 UTC m=+0.232701535 container create e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.573 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.634 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.707 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.708 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.708 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.712 253542 DEBUG nova.virt.libvirt.vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.712 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.713 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.713 253542 DEBUG os_vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.714 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.715 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:21 np0005534516 systemd[1]: Started libpod-conmon-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope.
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.718 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9541f2fd-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.719 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9541f2fd-4e, col_values=(('external_ids', {'iface-id': '9541f2fd-4ec3-47ef-a6a9-66e0052c303f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:39:c6', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:21 np0005534516 NetworkManager[48915]: <info>  [1764059601.7224] manager: (tap9541f2fd-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.740 253542 INFO os_vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e')#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.741 253542 DEBUG nova.virt.libvirt.vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.741 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.742 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.745 253542 DEBUG nova.virt.libvirt.guest [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:b9:39:c6"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <target dev="tap9541f2fd-4e"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:33:21 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:33:21 np0005534516 kernel: tap9541f2fd-4e: entered promiscuous mode
Nov 25 03:33:21 np0005534516 NetworkManager[48915]: <info>  [1764059601.7564] manager: (tap9541f2fd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Nov 25 03:33:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:21 np0005534516 systemd-udevd[313130]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed956ae5cd2d22f379fbcbbfc4cf7ec18f54c995101ca09b784bc90a153cec72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:21Z|00501|binding|INFO|Claiming lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f for this chassis.
Nov 25 03:33:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:21Z|00502|binding|INFO|9541f2fd-4ec3-47ef-a6a9-66e0052c303f: Claiming fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 03:33:21 np0005534516 NetworkManager[48915]: <info>  [1764059601.7744] device (tap9541f2fd-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:21 np0005534516 NetworkManager[48915]: <info>  [1764059601.7752] device (tap9541f2fd-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:21Z|00503|binding|INFO|Setting lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f ovn-installed in OVS
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:21Z|00504|binding|INFO|Setting lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f up in Southbound
Nov 25 03:33:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.823 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:39:c6 10.100.0.5'], port_security=['fa:16:3e:b9:39:c6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9541f2fd-4ec3-47ef-a6a9-66e0052c303f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:21 np0005534516 podman[313238]: 2025-11-25 08:33:21.834017909 +0000 UTC m=+0.503599816 container init e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:33:21 np0005534516 podman[313238]: 2025-11-25 08:33:21.844401308 +0000 UTC m=+0.513983185 container start e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:33:21 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : New worker (313383) forked
Nov 25 03:33:21 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : Loading success.
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:7a:1a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:b9:39:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.943 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:33:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.946 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92fefc84-c9d8-4b27-81f3-01df33e39fd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:21 np0005534516 nova_compute[253538]: 2025-11-25 08:33:21.992 253542 DEBUG nova.virt.libvirt.guest [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:21 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:21 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:21 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:21 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.017 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f602d176-501c-4f39-b391-f83210c2c782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.021 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c1846b33-9686-4971-bd74-21c11d006192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG nova.compute.manager [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG nova.compute.manager [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Processing event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.046 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059602.0463004, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.046 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.049 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.053 253542 INFO nova.virt.libvirt.driver [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance spawned successfully.#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.054 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.058 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[18f17941-7d38-4c07-b48a-7a2897b25d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.063 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.066 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63537506-ff1c-4bfd-b2f6-2d0d160db728]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313400, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.090 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.091 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.091 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.092 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.092 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.093 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.096 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cd297f-d05c-413d-a485-b8dd2aea124d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313401, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313401, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.097 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:33:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.266 253542 INFO nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 9.88 seconds to spawn the instance on the hypervisor.
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.267 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 98858d83-4ef4-4cbe-ad4e-0e01660e7266 does not exist
Nov 25 03:33:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev beaeef86-70aa-4762-8f03-cd1ef0d76ee4 does not exist
Nov 25 03:33:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 26dccc6c-2afc-4309-9281-5d3cd0f8265b does not exist
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:33:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.399 253542 INFO nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 10.90 seconds to build instance.
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.432 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.659 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059587.6583495, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.660 253542 INFO nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Stopped (Lifecycle Event)
Nov 25 03:33:22 np0005534516 nova_compute[253538]: 2025-11-25 08:33:22.677 253542 DEBUG nova.compute.manager [None req-c77bd0b9-4498-45a3-b736-6fa40271e370 - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:33:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:22Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.001515957 +0000 UTC m=+0.058718819 container create 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:22.962463617 +0000 UTC m=+0.019666489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:23 np0005534516 systemd[1]: Started libpod-conmon-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope.
Nov 25 03:33:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.18358537 +0000 UTC m=+0.240788232 container init 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.193390243 +0000 UTC m=+0.250593095 container start 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:33:23 np0005534516 serene_almeida[313565]: 167 167
Nov 25 03:33:23 np0005534516 systemd[1]: libpod-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope: Deactivated successfully.
Nov 25 03:33:23 np0005534516 conmon[313565]: conmon 0d04284f5d6c7a5b663a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope/container/memory.events
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.229153995 +0000 UTC m=+0.286356847 container attach 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.230001797 +0000 UTC m=+0.287204669 container died 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 03:33:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:33:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.318 253542 DEBUG nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.321 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.321 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.322 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.322 253542 DEBUG nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.323 253542 WARNING nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f for instance with vm_state active and task_state None.
Nov 25 03:33:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-56e0d7094f993eb703b400e74939d34d77075dff8cc8cd9364e06beae62b565e-merged.mount: Deactivated successfully.
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:23 np0005534516 podman[313551]: 2025-11-25 08:33:23.6501711 +0000 UTC m=+0.707373972 container remove 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:33:23 np0005534516 systemd[1]: libpod-conmon-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope: Deactivated successfully.
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.890 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.891 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:23 np0005534516 podman[313593]: 2025-11-25 08:33:23.908444152 +0000 UTC m=+0.052081081 container create ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:33:23 np0005534516 nova_compute[253538]: 2025-11-25 08:33:23.909 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:33:23 np0005534516 systemd[1]: Started libpod-conmon-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope.
Nov 25 03:33:23 np0005534516 podman[313593]: 2025-11-25 08:33:23.88121661 +0000 UTC m=+0.024853559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:24 np0005534516 podman[313593]: 2025-11-25 08:33:24.106891774 +0000 UTC m=+0.250528723 container init ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:33:24 np0005534516 podman[313593]: 2025-11-25 08:33:24.113222225 +0000 UTC m=+0.256859194 container start ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:33:24 np0005534516 podman[313593]: 2025-11-25 08:33:24.146077808 +0000 UTC m=+0.289714757 container attach ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.285 253542 DEBUG nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.286 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 WARNING nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received unexpected event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with vm_state active and task_state None.
Nov 25 03:33:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:24Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 03:33:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:24Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 03:33:24 np0005534516 nova_compute[253538]: 2025-11-25 08:33:24.346 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:33:25 np0005534516 sharp_chatterjee[313609]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:33:25 np0005534516 sharp_chatterjee[313609]: --> relative data size: 1.0
Nov 25 03:33:25 np0005534516 sharp_chatterjee[313609]: --> All data devices are unavailable
Nov 25 03:33:25 np0005534516 systemd[1]: libpod-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope: Deactivated successfully.
Nov 25 03:33:25 np0005534516 podman[313593]: 2025-11-25 08:33:25.150300338 +0000 UTC m=+1.293937267 container died ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:33:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Nov 25 03:33:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605-merged.mount: Deactivated successfully.
Nov 25 03:33:25 np0005534516 nova_compute[253538]: 2025-11-25 08:33:25.445 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:33:25 np0005534516 nova_compute[253538]: 2025-11-25 08:33:25.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:26 np0005534516 podman[313593]: 2025-11-25 08:33:26.342709245 +0000 UTC m=+2.486346184 container remove ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:33:26 np0005534516 systemd[1]: libpod-conmon-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope: Deactivated successfully.
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.502 253542 DEBUG nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.502 253542 WARNING nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:26 np0005534516 nova_compute[253538]: 2025-11-25 08:33:26.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.024976141 +0000 UTC m=+0.028054995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.173526383 +0000 UTC m=+0.176605177 container create a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:33:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 300 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1006 KiB/s wr, 199 op/s
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.290 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.291 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.314 253542 DEBUG nova.objects.instance [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.331 253542 DEBUG nova.virt.libvirt.vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.331 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.332 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.337 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.340 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.344 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.345 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <target dev="tap223207be-35"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:33:27 np0005534516 systemd[1]: Started libpod-conmon-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope.
Nov 25 03:33:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.456 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.465423008 +0000 UTC m=+0.468501812 container init a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.467 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <name>instance-00000034</name>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:1a:58:19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tapd8bd16e1-36'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:3a:eb:0c'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tap223207be-35'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tap582f57a6-32'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tap9541f2fd-4e'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 INFO nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the persistent domain config.
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tap223207be-35 with device alias net1 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <target dev="tap223207be-35"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.475665394 +0000 UTC m=+0.478744168 container start a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:33:27 np0005534516 goofy_gates[313804]: 167 167
Nov 25 03:33:27 np0005534516 systemd[1]: libpod-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope: Deactivated successfully.
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:33:27 np0005534516 kernel: tap223207be-35 (unregistering): left promiscuous mode
Nov 25 03:33:27 np0005534516 NetworkManager[48915]: <info>  [1764059607.5853] device (tap223207be-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:27Z|00505|binding|INFO|Releasing lport 223207be-35e0-4b8b-bf78-113792059910 from this chassis (sb_readonly=0)
Nov 25 03:33:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:27Z|00506|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 down in Southbound
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:33:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:27Z|00507|binding|INFO|Removing iface tap223207be-35 ovn-installed in OVS
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.602 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059607.6025817, 8191f951-44bc-4371-957a-f2e7d37c1a32 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.604 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tap223207be-35 with device alias net1 for instance 8191f951-44bc-4371-957a-f2e7d37c1a32 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.605 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.612 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <name>instance-00000034</name>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.617591469 +0000 UTC m=+0.620670323 container attach a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:1a:58:19'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tapd8bd16e1-36'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tap582f57a6-32'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target dev='tap9541f2fd-4e'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='net3'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:33:27 np0005534516 podman[313788]: 2025-11-25 08:33:27.619187941 +0000 UTC m=+0.622266705 container died a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.613 253542 INFO nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the live domain config.#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.614 253542 DEBUG nova.virt.libvirt.vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.614 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.615 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.615 253542 DEBUG os_vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.618 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap223207be-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.633 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:eb:0c 10.100.0.12'], port_security=['fa:16:3e:3a:eb:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=223207be-35e0-4b8b-bf78-113792059910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.635 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 223207be-35e0-4b8b-bf78-113792059910 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.641 253542 INFO os_vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.638 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.642 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:27 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:27 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:27 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.659 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d94eb40c-e4ef-4666-a3b9-ebda647840dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.696 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aae8a132-c5cb-4e06-98ad-e8a13f72ef8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.699 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[34dae8cf-bb93-4363-9746-78866093dd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9761f06e-ac55-494e-a033-ba83f7fe85f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5f54db-029e-4230-8fc2-197a7551f25a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313832, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.791 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[670fdf36-ec00-4b06-a4aa-b56037c8e388]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313833, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313833, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.794 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 nova_compute[253538]: 2025-11-25 08:33:27.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.797 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.797 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.798 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.798 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-14ff2af751f70f01b043bc8c408fb365f5a32a9c6de7b29eafc2c3a4c60fda83-merged.mount: Deactivated successfully.
Nov 25 03:33:28 np0005534516 podman[313788]: 2025-11-25 08:33:28.140368768 +0000 UTC m=+1.143447532 container remove a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:33:28 np0005534516 systemd[1]: libpod-conmon-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope: Deactivated successfully.
Nov 25 03:33:28 np0005534516 podman[313842]: 2025-11-25 08:33:28.418579006 +0000 UTC m=+0.050659913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:28 np0005534516 podman[313842]: 2025-11-25 08:33:28.523381762 +0000 UTC m=+0.155462699 container create 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.593 253542 DEBUG nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.593 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 WARNING nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.637 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.638 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.638 253542 DEBUG nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:28 np0005534516 systemd[1]: Started libpod-conmon-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope.
Nov 25 03:33:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.700 253542 DEBUG nova.compute.manager [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.701 253542 INFO nova.compute.manager [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Neutron deleted interface 223207be-35e0-4b8b-bf78-113792059910; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.702 253542 DEBUG nova.network.neutron [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:28 np0005534516 podman[313842]: 2025-11-25 08:33:28.721726263 +0000 UTC m=+0.353807200 container init 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:33:28 np0005534516 podman[313842]: 2025-11-25 08:33:28.731368582 +0000 UTC m=+0.363449489 container start 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:33:28 np0005534516 podman[313842]: 2025-11-25 08:33:28.735694429 +0000 UTC m=+0.367775366 container attach 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.863 253542 DEBUG nova.objects.instance [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.885 253542 DEBUG nova.objects.instance [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.905 253542 DEBUG nova.virt.libvirt.vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.905 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.906 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.910 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.914 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <name>instance-00000034</name>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:1a:58:19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tapd8bd16e1-36'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tap582f57a6-32'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tap9541f2fd-4e'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.914 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.920 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <name>instance-00000034</name>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:1a:58:19'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tapd8bd16e1-36'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tap582f57a6-32'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target dev='tap9541f2fd-4e'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='net3'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.921 253542 WARNING nova.virt.libvirt.driver [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Detaching interface fa:16:3e:3a:eb:0c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap223207be-35' not found.
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.922 253542 DEBUG nova.virt.libvirt.vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.922 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.923 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.923 253542 DEBUG os_vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap223207be-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.928 253542 INFO os_vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')#033[00m
Nov 25 03:33:28 np0005534516 nova_compute[253538]: 2025-11-25 08:33:28.929 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:33:28</nova:creationTime>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 03:33:28 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:33:28 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:33:28 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:33:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:33:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:33:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:33:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:33:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:29Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 03:33:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:29Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 03:33:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 303 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 159 op/s
Nov 25 03:33:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:29.272 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.272 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:29.274 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.274 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.274 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.275 253542 INFO nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Rebooting instance#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:29 np0005534516 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG nova.network.neutron [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]: {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    "0": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "devices": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "/dev/loop3"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            ],
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_name": "ceph_lv0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_size": "21470642176",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "name": "ceph_lv0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "tags": {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_name": "ceph",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.crush_device_class": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.encrypted": "0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_id": "0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.vdo": "0"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            },
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "vg_name": "ceph_vg0"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        }
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    ],
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    "1": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "devices": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "/dev/loop4"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            ],
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_name": "ceph_lv1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_size": "21470642176",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "name": "ceph_lv1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "tags": {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_name": "ceph",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.crush_device_class": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.encrypted": "0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_id": "1",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.vdo": "0"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            },
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "vg_name": "ceph_vg1"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        }
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    ],
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    "2": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "devices": [
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "/dev/loop5"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            ],
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_name": "ceph_lv2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_size": "21470642176",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "name": "ceph_lv2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "tags": {
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.cluster_name": "ceph",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.crush_device_class": "",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.encrypted": "0",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osd_id": "2",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:                "ceph.vdo": "0"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            },
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "type": "block",
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:            "vg_name": "ceph_vg2"
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:        }
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]:    ]
Nov 25 03:33:29 np0005534516 elastic_darwin[313859]: }
Nov 25 03:33:29 np0005534516 systemd[1]: libpod-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope: Deactivated successfully.
Nov 25 03:33:29 np0005534516 podman[313842]: 2025-11-25 08:33:29.606588395 +0000 UTC m=+1.238669332 container died 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:33:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3-merged.mount: Deactivated successfully.
Nov 25 03:33:29 np0005534516 podman[313842]: 2025-11-25 08:33:29.655410167 +0000 UTC m=+1.287491074 container remove 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 03:33:29 np0005534516 systemd[1]: libpod-conmon-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope: Deactivated successfully.
Nov 25 03:33:30 np0005534516 podman[314019]: 2025-11-25 08:33:30.330070409 +0000 UTC m=+0.046325287 container create cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:33:30 np0005534516 systemd[1]: Started libpod-conmon-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope.
Nov 25 03:33:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:30 np0005534516 podman[314019]: 2025-11-25 08:33:30.395655271 +0000 UTC m=+0.111910179 container init cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:33:30 np0005534516 podman[314019]: 2025-11-25 08:33:30.402027793 +0000 UTC m=+0.118282671 container start cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:33:30 np0005534516 podman[314019]: 2025-11-25 08:33:30.405025313 +0000 UTC m=+0.121280211 container attach cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:33:30 np0005534516 bold_spence[314036]: 167 167
Nov 25 03:33:30 np0005534516 systemd[1]: libpod-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope: Deactivated successfully.
Nov 25 03:33:30 np0005534516 conmon[314036]: conmon cc5d00d3da3cca2a3aff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope/container/memory.events
Nov 25 03:33:30 np0005534516 podman[314019]: 2025-11-25 08:33:30.311616942 +0000 UTC m=+0.027871840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:30 np0005534516 podman[314041]: 2025-11-25 08:33:30.447984478 +0000 UTC m=+0.026329318 container died cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:33:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c5dd66dd306135f39a966fd87e162aea3afbfd7cf1e2fa055c0e1faee25f77b9-merged.mount: Deactivated successfully.
Nov 25 03:33:30 np0005534516 podman[314041]: 2025-11-25 08:33:30.488937939 +0000 UTC m=+0.067282759 container remove cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:33:30 np0005534516 systemd[1]: libpod-conmon-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope: Deactivated successfully.
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.534 253542 INFO nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Port 223207be-35e0-4b8b-bf78-113792059910 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:30 np0005534516 podman[314063]: 2025-11-25 08:33:30.702212371 +0000 UTC m=+0.044828616 container create 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:33:30 np0005534516 systemd[1]: Started libpod-conmon-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope.
Nov 25 03:33:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:30 np0005534516 podman[314063]: 2025-11-25 08:33:30.685227404 +0000 UTC m=+0.027843679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:33:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:30 np0005534516 podman[314063]: 2025-11-25 08:33:30.795324793 +0000 UTC m=+0.137941088 container init 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:33:30 np0005534516 podman[314063]: 2025-11-25 08:33:30.803255366 +0000 UTC m=+0.145871611 container start 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:33:30 np0005534516 podman[314063]: 2025-11-25 08:33:30.806270067 +0000 UTC m=+0.148886322 container attach 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.815 253542 DEBUG nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.817 253542 WARNING nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.910 162739 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ea1847ed-a3ec-4944-b720-ee87acf36b74 with type ""#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.912 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:39:c6 10.100.0.5'], port_security=['fa:16:3e:b9:39:c6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9541f2fd-4ec3-47ef-a6a9-66e0052c303f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.913 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.915 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:30Z|00508|binding|INFO|Removing iface tap9541f2fd-4e ovn-installed in OVS
Nov 25 03:33:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:30Z|00509|binding|INFO|Removing lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f ovn-installed in OVS
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.931 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9317b-3a7a-4f59-9f4e-e4496aae36fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:30 np0005534516 nova_compute[253538]: 2025-11-25 08:33:30.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.965 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[96a5f9ac-11ed-414f-90fb-108a69894281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.968 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8c381c-a325-46e0-a140-7f96530ca563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.001 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c39d6667-535e-41d4-8af7-bfb00655d8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29768961-2caa-4257-addc-0b306f1b8d7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314089, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edaea7b9-1c11-4a99-bced-4e98e3621be8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314090, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314090, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.096 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.097 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.181 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.182 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.183 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.185 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.186 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.187 253542 INFO nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Terminating instance#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.188 253542 DEBUG nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:33:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 312 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 157 op/s
Nov 25 03:33:31 np0005534516 kernel: tapd8bd16e1-36 (unregistering): left promiscuous mode
Nov 25 03:33:31 np0005534516 NetworkManager[48915]: <info>  [1764059611.2518] device (tapd8bd16e1-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:31 np0005534516 kernel: tap582f57a6-32 (unregistering): left promiscuous mode
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00510|binding|INFO|Releasing lport d8bd16e1-3695-474d-be04-7fdf44bee803 from this chassis (sb_readonly=0)
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00511|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 down in Southbound
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00512|binding|INFO|Removing iface tapd8bd16e1-36 ovn-installed in OVS
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 NetworkManager[48915]: <info>  [1764059611.2750] device (tap582f57a6-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.283 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:58:19 10.100.0.11'], port_security=['fa:16:3e:1a:58:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab6e7e4a-351f-4b59-b94e-a7f51f236dd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d8bd16e1-3695-474d-be04-7fdf44bee803) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.286 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d8bd16e1-3695-474d-be04-7fdf44bee803 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.289 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:31 np0005534516 kernel: tap9541f2fd-4e (unregistering): left promiscuous mode
Nov 25 03:33:31 np0005534516 NetworkManager[48915]: <info>  [1764059611.3120] device (tap9541f2fd-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab4d1fe-9ca3-4836-a40b-9bce1a91fbf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00513|binding|INFO|Releasing lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 from this chassis (sb_readonly=0)
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00514|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 down in Southbound
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:31Z|00515|binding|INFO|Removing iface tap582f57a6-32 ovn-installed in OVS
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.340 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:1a:cb 10.100.0.14'], port_security=['fa:16:3e:7a:1a:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=582f57a6-32d3-44a0-ab47-d147a0bb0f43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.364 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5485558a-b46a-4ca1-aa00-6e6c22486a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.367 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69575316-ee99-4a9d-9462-f93015a861cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 25 03:33:31 np0005534516 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Consumed 15.499s CPU time.
Nov 25 03:33:31 np0005534516 systemd-machined[215790]: Machine qemu-58-instance-00000034 terminated.
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.394 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[acef6736-1fd9-4868-8ac7-204b990f0007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a47d0d65-357b-4a66-8d71-3a17d6fce061]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314112, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 NetworkManager[48915]: <info>  [1764059611.4190] manager: (tap582f57a6-32): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Nov 25 03:33:31 np0005534516 NetworkManager[48915]: <info>  [1764059611.4297] manager: (tap9541f2fd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2fd808-453b-4487-be93-a88b19368c5d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314127, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314127, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.453 253542 INFO nova.virt.libvirt.driver [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance destroyed successfully.#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.453 253542 DEBUG nova.objects.instance [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.456 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.458 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.461 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.463 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0828591-373b-4cfe-801d-bba912c36f0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.463 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.474 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.475 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.476 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.476 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.478 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8bd16e1-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.487 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36')#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.488 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.489 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.489 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.490 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.491 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582f57a6-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.499 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32')#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.499 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.502 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9541f2fd-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.506 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e')#033[00m
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : Exiting Master process...
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : Exiting Master process...
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [ALERT]    (309993) : Current worker (309999) exited with code 143 (Terminated)
Nov 25 03:33:31 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : All workers exited. Exiting... (0)
Nov 25 03:33:31 np0005534516 systemd[1]: libpod-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope: Deactivated successfully.
Nov 25 03:33:31 np0005534516 conmon[309957]: conmon 9f6e487e8d7e168d80b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope/container/memory.events
Nov 25 03:33:31 np0005534516 podman[314189]: 2025-11-25 08:33:31.602137757 +0000 UTC m=+0.045930015 container died 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:33:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9fc5cfaa8d3aa0802256d8920c39f377bf45d88d3a7eb5b3dfababcae814e2ac-merged.mount: Deactivated successfully.
Nov 25 03:33:31 np0005534516 podman[314189]: 2025-11-25 08:33:31.683153864 +0000 UTC m=+0.126946122 container cleanup 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:33:31 np0005534516 systemd[1]: libpod-conmon-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope: Deactivated successfully.
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]: {
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_id": 1,
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "type": "bluestore"
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    },
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_id": 2,
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "type": "bluestore"
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    },
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_id": 0,
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:        "type": "bluestore"
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]:    }
Nov 25 03:33:31 np0005534516 wonderful_hamilton[314079]: }
Nov 25 03:33:31 np0005534516 systemd[1]: libpod-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope: Deactivated successfully.
Nov 25 03:33:31 np0005534516 conmon[314079]: conmon 3dd93b34bac3793ca1c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope/container/memory.events
Nov 25 03:33:31 np0005534516 podman[314063]: 2025-11-25 08:33:31.83407638 +0000 UTC m=+1.176692625 container died 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.906 253542 DEBUG nova.network.neutron [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.931 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:31 np0005534516 nova_compute[253538]: 2025-11-25 08:33:31.932 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e-merged.mount: Deactivated successfully.
Nov 25 03:33:32 np0005534516 podman[314063]: 2025-11-25 08:33:32.143912647 +0000 UTC m=+1.486528902 container remove 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:33:32 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:33:32 np0005534516 systemd[1]: libpod-conmon-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope: Deactivated successfully.
Nov 25 03:33:32 np0005534516 NetworkManager[48915]: <info>  [1764059612.1655] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:32 np0005534516 podman[314236]: 2025-11-25 08:33:32.16706191 +0000 UTC m=+0.454307542 container remove 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.173 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1e3649-69e7-4851-93ba-8a837d49bde3]: (4, ('Tue Nov 25 08:33:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54)\n9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54\nTue Nov 25 08:33:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54)\n9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.175 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4924befc-6705-44bc-9de5-1403d5e553d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.181 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:32Z|00516|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:33:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:32Z|00517|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 03:33:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:32Z|00518|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.243 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:32 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f5342b74-0d7a-4f9c-8638-cc7a1fb0c94d does not exist
Nov 25 03:33:32 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e7a2f0ed-1f54-4994-9f78-403b9c146648 does not exist
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.267 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:33:32 np0005534516 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000032.scope: Consumed 13.684s CPU time.
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd850f6e-3ab3-4779-b776-a75678fe55bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 systemd-machined[215790]: Machine qemu-63-instance-00000032 terminated.
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[696041c5-2289-4727-8d78-097d9db4763a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.287 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2040811-a899-4cc1-ac3e-6c9e6c93431c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.310 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6acdef-19b1-49ed-8563-9adaf67f9a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490978, 'reachable_time': 38226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314296, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.315 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.315 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7e310f34-9025-4a57-830f-4fb7e0c9b313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.316 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.317 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9addd85e-e12b-4e3f-9a6f-2a9dfee76879]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.319 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.420 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.422 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.433 253542 INFO nova.virt.libvirt.driver [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deleting instance files /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32_del#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.434 253542 INFO nova.virt.libvirt.driver [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deletion of /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32_del complete#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.472 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.473 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [WARNING]  (312354) : Exiting Master process...
Nov 25 03:33:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [ALERT]    (312354) : Current worker (312356) exited with code 143 (Terminated)
Nov 25 03:33:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [WARNING]  (312354) : All workers exited. Exiting... (0)
Nov 25 03:33:32 np0005534516 systemd[1]: libpod-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope: Deactivated successfully.
Nov 25 03:33:32 np0005534516 podman[314342]: 2025-11-25 08:33:32.48929202 +0000 UTC m=+0.057942028 container died bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.491 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.492 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.492 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.493 253542 DEBUG os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.497 253542 INFO nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 1.31 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.498 253542 DEBUG oslo.service.loopingcall [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.499 253542 DEBUG nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.499 253542 DEBUG nova.network.neutron [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.508 253542 INFO os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.516 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.521 253542 WARNING nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1bbb652004c5b4ad3c3896e006b5f7cd4464afd4fecf01ab4a4f22a51d10b5b9-merged.mount: Deactivated successfully.
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.530 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.530 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:32 np0005534516 podman[314342]: 2025-11-25 08:33:32.540372883 +0000 UTC m=+0.109022891 container cleanup bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:33:32 np0005534516 systemd[1]: libpod-conmon-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope: Deactivated successfully.
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.550 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:32 np0005534516 podman[314378]: 2025-11-25 08:33:32.60129681 +0000 UTC m=+0.039559574 container remove bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c6a54-10b3-4f7d-a0be-98678300159e]: (4, ('Tue Nov 25 08:33:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d)\nbdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d\nTue Nov 25 08:33:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d)\nbdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0462a7-7552-459b-b8b0-1b3c6ec219ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.614 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.635 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.635 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e637d905-1e8d-4940-bdbb-699e88b1cc51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.653 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01a4a9d6-0005-473c-b99d-db939941c60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f42c2a8d-fa77-42fe-9f82-93d37111748e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.670 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[404e5754-9173-434b-ad83-e30f0584c6e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494111, 'reachable_time': 40353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314393, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.674 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.674 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d19c718e-ab7d-4045-84b7-271128a65845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.723 253542 DEBUG nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.766 253542 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.766 253542 DEBUG nova.network.neutron [-] Unable to show port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.857 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.878 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.896 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.897 253542 INFO nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Neutron deleted interface 9541f2fd-4ec3-47ef-a6a9-66e0052c303f; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.897 253542 DEBUG nova.network.neutron [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956643919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.961 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Detach interface failed, port_id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f, reason: Instance 8191f951-44bc-4371-957a-f2e7d37c1a32 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.961 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.962 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.963 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.963 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.969 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:33:32 np0005534516 nova_compute[253538]: 2025-11-25 08:33:32.976 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.003 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.003 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.005 253542 INFO nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Terminating instance#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.006 253542 DEBUG nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.009 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:33 np0005534516 kernel: tapdc1f5923-d9 (unregistering): left promiscuous mode
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.0892] device (tapdc1f5923-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00519|binding|INFO|Releasing lport dc1f5923-d984-4e49-bb97-bc1a77ade410 from this chassis (sb_readonly=0)
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00520|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 down in Southbound
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00521|binding|INFO|Removing iface tapdc1f5923-d9 ovn-installed in OVS
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.118 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:1c:27 10.100.0.6'], port_security=['fa:16:3e:7d:1c:27 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=dc1f5923-d984-4e49-bb97-bc1a77ade410) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.119 162739 INFO neutron.agent.ovn.metadata.agent [-] Port dc1f5923-d984-4e49-bb97-bc1a77ade410 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.122 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[353f3ed2-c880-47dc-9f82-661d7a82f73d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.124 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 25 03:33:33 np0005534516 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 11.529s CPU time.
Nov 25 03:33:33 np0005534516 systemd-machined[215790]: Machine qemu-65-instance-00000039 terminated.
Nov 25 03:33:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 296 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Nov 25 03:33:33 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:33 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:33 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [WARNING]  (313379) : Exiting Master process...
Nov 25 03:33:33 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [ALERT]    (313379) : Current worker (313383) exited with code 143 (Terminated)
Nov 25 03:33:33 np0005534516 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [WARNING]  (313379) : All workers exited. Exiting... (0)
Nov 25 03:33:33 np0005534516 systemd[1]: libpod-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope: Deactivated successfully.
Nov 25 03:33:33 np0005534516 podman[314470]: 2025-11-25 08:33:33.260529778 +0000 UTC m=+0.055008580 container died e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.299 253542 INFO nova.virt.libvirt.driver [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance destroyed successfully.#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.300 253542 DEBUG nova.objects.instance [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.312 253542 DEBUG nova.virt.libvirt.vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.313 253542 DEBUG nova.network.os_vif_util [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.315 253542 DEBUG nova.network.os_vif_util [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.315 253542 DEBUG os_vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.319 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc1f5923-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ed956ae5cd2d22f379fbcbbfc4cf7ec18f54c995101ca09b784bc90a153cec72-merged.mount: Deactivated successfully.
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:33 np0005534516 podman[314470]: 2025-11-25 08:33:33.353727972 +0000 UTC m=+0.148206774 container cleanup e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.355 253542 INFO os_vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9')#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: libpod-conmon-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope: Deactivated successfully.
Nov 25 03:33:33 np0005534516 podman[314507]: 2025-11-25 08:33:33.419239383 +0000 UTC m=+0.042324598 container remove e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.424 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0069469d-9629-4fdb-b60e-0e51d1b0514d]: (4, ('Tue Nov 25 08:33:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b)\ne03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b\nTue Nov 25 08:33:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b)\ne03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c66ebabb-b95a-40b6-84fe-9fe9f30436e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.427 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.434 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c24ab10-ae22-4916-81bb-629a5a206f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5e680b-3f86-40bc-91d2-3f6658000610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.451 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c23955e1-b4e9-4290-a967-d2780969018b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.466 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fab8219-970b-4152-9198-a052cae7c587]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495494, 'reachable_time': 32216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314533, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.468 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.468 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4bad87-e68a-41e0-bc4c-bbf3701f68fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.482 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.483 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.484 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.485 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.486 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.499 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <name>instance-00000032</name>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:32</nova:creationTime>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:07:cd:40"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <target dev="tap15af3dd8-97"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:33 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:33 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:33 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:33 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.501 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.502 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.503 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.503 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.504 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.504 253542 DEBUG os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.5119] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.518 253542 INFO os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:33:33 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.5838] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 25 03:33:33 np0005534516 systemd-udevd[314134]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00522|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00523|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.594 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.595 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.597 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.5995] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.6005] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00524|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00525|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00a99641-0f49-43a3-9c14-7e309dfe4f6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.610 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.613 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.613 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44ece8cf-5865-4ea8-8e40-7f6491047be3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb0109a-c1f4-435c-828c-f4b7be4d4ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 systemd-machined[215790]: New machine qemu-66-instance-00000032.
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.625 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0e746305-5cc4-49a4-bbe4-a298a67bb7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 systemd[1]: Started Virtual Machine qemu-66-instance-00000032.
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[950a124f-9303-4d17-a1ec-74b6a42b9062]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c480c082-068f-4fb7-82c0-d9a3f724a636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.6801] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.679 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e16d78c3-030c-4bc9-8bb4-ce7ad16d854b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.711 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7af771-bcc7-4f9a-b1ce-16fb2942bd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.714 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7594475f-cab6-4bdb-b179-e227b1f4b42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.716 253542 INFO nova.virt.libvirt.driver [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deleting instance files /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6_del#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.717 253542 INFO nova.virt.libvirt.driver [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deletion of /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6_del complete#033[00m
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.7416] device (tap908154e6-30): carrier: link connected
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.747 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a38f5d05-d8bd-4e0a-8e8e-91bd61995c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.765 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[229d605c-4b97-451b-894e-7c0387b098bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314580, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.766 253542 INFO nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.766 253542 DEBUG oslo.service.loopingcall [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.767 253542 DEBUG nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.767 253542 DEBUG nova.network.neutron [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66027d0b-c3a7-4e27-a0b1-311bc486a284]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496834, 'tstamp': 496834}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314581, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3281fe4d-7bd5-4f6d-b079-d02d38ae8e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314582, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24ede5e0-50c5-4346-bd38-83527a09616e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2568daaa-3ce2-411a-9706-2a563b2ead2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:33:33 np0005534516 NetworkManager[48915]: <info>  [1764059613.8919] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.894 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:33Z|00526|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.911 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68fdae4a-5b69-403e-be51-17c7dea7aae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.913 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.914 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.968 253542 DEBUG nova.network.neutron [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.989 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.989 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059613.9894643, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.990 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.991 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.996 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.#033[00m
Nov 25 03:33:33 np0005534516 nova_compute[253538]: 2025-11-25 08:33:33.996 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.037 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.055 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.056 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059613.9907825, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.068 253542 INFO nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 1.57 seconds to deallocate network for instance.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.073 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.077 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.095 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.181 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.242 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.242 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:34 np0005534516 podman[314653]: 2025-11-25 08:33:34.322038767 +0000 UTC m=+0.069310814 container create 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.351 253542 DEBUG oslo_concurrency.processutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:34 np0005534516 systemd[1]: Started libpod-conmon-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope.
Nov 25 03:33:34 np0005534516 podman[314653]: 2025-11-25 08:33:34.275225998 +0000 UTC m=+0.022498095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287a9705456981153ae9eff66d536d8de2c4f253b08bf0d27b3e6ca96100e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:34 np0005534516 podman[314653]: 2025-11-25 08:33:34.407786332 +0000 UTC m=+0.155058399 container init 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:33:34 np0005534516 podman[314653]: 2025-11-25 08:33:34.418217901 +0000 UTC m=+0.165489948 container start 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:33:34 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : New worker (314676) forked
Nov 25 03:33:34 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : Loading success.
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.516 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.517 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.517 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 WARNING nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.519 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.519 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.521 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.521 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.522 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.524 253542 WARNING nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received unexpected event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.565 253542 DEBUG nova.network.neutron [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.677 253542 INFO nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.772 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105950564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.849 253542 DEBUG oslo_concurrency.processutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.855 253542 DEBUG nova.compute.provider_tree [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.867 253542 DEBUG nova.scheduler.client.report [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.907 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.910 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:34 np0005534516 nova_compute[253538]: 2025-11-25 08:33:34.948 253542 INFO nova.scheduler.client.report [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 8191f951-44bc-4371-957a-f2e7d37c1a32#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.011 253542 DEBUG oslo_concurrency.processutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.048 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 WARNING nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.067 253542 WARNING nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.067 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-deleted-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 222 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Nov 25 03:33:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884688720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.452 253542 DEBUG oslo_concurrency.processutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.459 253542 DEBUG nova.compute.provider_tree [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.473 253542 DEBUG nova.scheduler.client.report [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.496 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.525 253542 INFO nova.scheduler.client.report [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance fb888d2a-db54-44dc-8ec7-db417fa3cff6#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.575 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:35 np0005534516 nova_compute[253538]: 2025-11-25 08:33:35.638 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:36.277 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:36 np0005534516 nova_compute[253538]: 2025-11-25 08:33:36.545 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:33:36 np0005534516 podman[314728]: 2025-11-25 08:33:36.796776247 +0000 UTC m=+0.043507390 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 03:33:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 214 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Nov 25 03:33:38 np0005534516 nova_compute[253538]: 2025-11-25 08:33:38.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:38 np0005534516 kernel: tap56d077f0-8f (unregistering): left promiscuous mode
Nov 25 03:33:38 np0005534516 NetworkManager[48915]: <info>  [1764059618.7767] device (tap56d077f0-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:33:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:38Z|00527|binding|INFO|Releasing lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 from this chassis (sb_readonly=0)
Nov 25 03:33:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:38Z|00528|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 down in Southbound
Nov 25 03:33:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:38Z|00529|binding|INFO|Removing iface tap56d077f0-8f ovn-installed in OVS
Nov 25 03:33:38 np0005534516 nova_compute[253538]: 2025-11-25 08:33:38.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:38 np0005534516 nova_compute[253538]: 2025-11-25 08:33:38.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ce:b3 10.100.0.4'], port_security=['fa:16:3e:7a:ce:b3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '528fb917-0169-441d-b32d-652963344aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=56d077f0-8f69-40d8-bd5e-267a70c4c319) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 56d077f0-8f69-40d8-bd5e-267a70c4c319 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:33:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.812 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:33:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9da6864d-a619-4a85-a5ff-ee792d142ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.815 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore#033[00m
Nov 25 03:33:38 np0005534516 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 25 03:33:38 np0005534516 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 13.730s CPU time.
Nov 25 03:33:38 np0005534516 systemd-machined[215790]: Machine qemu-64-instance-00000038 terminated.
Nov 25 03:33:38 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : haproxy version is 2.8.14-c23fe91
Nov 25 03:33:38 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : path to executable is /usr/sbin/haproxy
Nov 25 03:33:38 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [WARNING]  (312709) : Exiting Master process...
Nov 25 03:33:38 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [ALERT]    (312709) : Current worker (312711) exited with code 143 (Terminated)
Nov 25 03:33:38 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [WARNING]  (312709) : All workers exited. Exiting... (0)
Nov 25 03:33:38 np0005534516 systemd[1]: libpod-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope: Deactivated successfully.
Nov 25 03:33:38 np0005534516 podman[314771]: 2025-11-25 08:33:38.943855253 +0000 UTC m=+0.046074950 container died 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:33:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31-userdata-shm.mount: Deactivated successfully.
Nov 25 03:33:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7f3b17704a9232b7fd9a3c6391403db11755eeec93cf71409d4cffb3b11bf81a-merged.mount: Deactivated successfully.
Nov 25 03:33:38 np0005534516 podman[314771]: 2025-11-25 08:33:38.992486819 +0000 UTC m=+0.094706526 container cleanup 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 03:33:39 np0005534516 systemd[1]: libpod-conmon-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope: Deactivated successfully.
Nov 25 03:33:39 np0005534516 podman[314805]: 2025-11-25 08:33:39.061798272 +0000 UTC m=+0.044823195 container remove 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3204d20a-8d5e-4f21-b4ca-fb1a4df1ca07]: (4, ('Tue Nov 25 08:33:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31)\n78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31\nTue Nov 25 08:33:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31)\n78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1488f68-95fc-474a-b62a-e16f9943312a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.070 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:39 np0005534516 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0885ac72-9c03-4c86-bdba-525e656f9d42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16763442-708b-4853-8d01-948e6acbe837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.165 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9858a396-a3df-4713-9a09-48a4965352a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd53454c-f119-4c72-bc67-d428e55fc112]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494462, 'reachable_time': 38439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314825, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.185 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:33:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.185 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[29b087f8-35b7-4234-a7d7-21e939d3949d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.563 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance shutdown successfully after 24 seconds.#033[00m
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.569 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.#033[00m
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.570 253542 DEBUG nova.objects.instance [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:39 np0005534516 nova_compute[253538]: 2025-11-25 08:33:39.780 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Beginning cold snapshot process#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.037 253542 DEBUG nova.virt.libvirt.imagebackend [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.275 253542 DEBUG nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.276 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.276 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.277 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.277 253542 DEBUG nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.278 253542 WARNING nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.319 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] creating snapshot(1d443195410148ad889f449ade5f87f7) on rbd image(528fb917-0169-441d-b32d-652963344aea_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:33:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 25 03:33:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 25 03:33:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.642 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] cloning vms/528fb917-0169-441d-b32d-652963344aea_disk@1d443195410148ad889f449ade5f87f7 to images/98ee8490-b027-417d-923b-76479289f395 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:40 np0005534516 nova_compute[253538]: 2025-11-25 08:33:40.757 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] flattening images/98ee8490-b027-417d-923b-76479289f395 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:33:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.059 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:41 np0005534516 nova_compute[253538]: 2025-11-25 08:33:41.140 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] removing snapshot(1d443195410148ad889f449ade5f87f7) on rbd image(528fb917-0169-441d-b32d-652963344aea_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:33:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 660 KiB/s wr, 188 op/s
Nov 25 03:33:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 25 03:33:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 25 03:33:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 25 03:33:41 np0005534516 nova_compute[253538]: 2025-11-25 08:33:41.650 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] creating snapshot(snap) on rbd image(98ee8490-b027-417d-923b-76479289f395) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.395 253542 DEBUG nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.396 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.396 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.397 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.397 253542 DEBUG nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.398 253542 WARNING nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 03:33:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:42Z|00530|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:33:42 np0005534516 nova_compute[253538]: 2025-11-25 08:33:42.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 25 03:33:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 25 03:33:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 25 03:33:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 223 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 03:33:43 np0005534516 nova_compute[253538]: 2025-11-25 08:33:43.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:43 np0005534516 podman[314967]: 2025-11-25 08:33:43.854152712 +0000 UTC m=+0.094484251 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:33:43 np0005534516 nova_compute[253538]: 2025-11-25 08:33:43.943 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Snapshot image upload complete#033[00m
Nov 25 03:33:43 np0005534516 nova_compute[253538]: 2025-11-25 08:33:43.943 253542 DEBUG nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:43 np0005534516 nova_compute[253538]: 2025-11-25 08:33:43.994 253542 INFO nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shelve offloading#033[00m
Nov 25 03:33:44 np0005534516 nova_compute[253538]: 2025-11-25 08:33:44.002 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.#033[00m
Nov 25 03:33:44 np0005534516 nova_compute[253538]: 2025-11-25 08:33:44.003 253542 DEBUG nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:44 np0005534516 nova_compute[253538]: 2025-11-25 08:33:44.006 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:44 np0005534516 nova_compute[253538]: 2025-11-25 08:33:44.006 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:44 np0005534516 nova_compute[253538]: 2025-11-25 08:33:44.007 253542 DEBUG nova.network.neutron [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Nov 25 03:33:45 np0005534516 nova_compute[253538]: 2025-11-25 08:33:45.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 25 03:33:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 25 03:33:46 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 25 03:33:46 np0005534516 nova_compute[253538]: 2025-11-25 08:33:46.449 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059611.4465885, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:46 np0005534516 nova_compute[253538]: 2025-11-25 08:33:46.449 253542 INFO nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:33:46 np0005534516 nova_compute[253538]: 2025-11-25 08:33:46.471 253542 DEBUG nova.compute.manager [None req-adb8f138-196b-4ed1-98a6-602f46fae392 - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:46 np0005534516 podman[314992]: 2025-11-25 08:33:46.856099882 +0000 UTC m=+0.097571893 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:33:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:47Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.164 253542 DEBUG nova.network.neutron [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.181 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 176 op/s
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.716 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.717 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.732 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.812 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.812 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.821 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.821 253542 INFO nova.compute.claims [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:33:47 np0005534516 nova_compute[253538]: 2025-11-25 08:33:47.947 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.280 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.282 253542 DEBUG nova.objects.instance [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.292 253542 DEBUG nova.virt.libvirt.vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member',shelved_at='2025-11-25T08:33:43.943593',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='98ee8490-b027-417d-923b-76479289f395'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:39Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.292 253542 DEBUG nova.network.os_vif_util [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.293 253542 DEBUG nova.network.os_vif_util [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.293 253542 DEBUG os_vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.295 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56d077f0-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.296 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059613.2900736, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.297 253542 INFO nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.305 253542 INFO os_vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f')#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.326 253542 DEBUG nova.compute.manager [None req-6f45b46c-99cd-453e-b384-1e92b29703bd - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.368 253542 DEBUG nova.compute.manager [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.369 253542 DEBUG nova.compute.manager [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing instance network info cache due to event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.370 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.371 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.371 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184927680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.436 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.443 253542 DEBUG nova.compute.provider_tree [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.455 253542 DEBUG nova.scheduler.client.report [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.473 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.474 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.518 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.519 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.557 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.576 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.656 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.659 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.659 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating image(s)#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.687 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.716 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.745 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.750 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.838 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.839 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.840 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.840 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.863 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:48 np0005534516 nova_compute[253538]: 2025-11-25 08:33:48.867 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:49 np0005534516 nova_compute[253538]: 2025-11-25 08:33:49.215 253542 DEBUG nova.policy [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:33:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Nov 25 03:33:50 np0005534516 nova_compute[253538]: 2025-11-25 08:33:50.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:50 np0005534516 nova_compute[253538]: 2025-11-25 08:33:50.693 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updated VIF entry in instance network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:50 np0005534516 nova_compute[253538]: 2025-11-25 08:33:50.694 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": null, "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap56d077f0-8f", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:50 np0005534516 nova_compute[253538]: 2025-11-25 08:33:50.713 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 25 03:33:51 np0005534516 nova_compute[253538]: 2025-11-25 08:33:51.307 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully created port: 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:33:52 np0005534516 nova_compute[253538]: 2025-11-25 08:33:52.017 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:52 np0005534516 nova_compute[253538]: 2025-11-25 08:33:52.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:52 np0005534516 nova_compute[253538]: 2025-11-25 08:33:52.319 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.008 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully updated port: 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.088 253542 DEBUG nova.compute.manager [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.089 253542 DEBUG nova.compute.manager [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.089 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.164 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 291 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:33:53
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'backups']
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.335 253542 DEBUG nova.objects.instance [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.349 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.349 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Ensure instance console log exists: /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.350 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.351 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:53 np0005534516 nova_compute[253538]: 2025-11-25 08:33:53.351 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:33:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.016 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059619.0152655, 528fb917-0169-441d-b32d-652963344aea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.016 253542 INFO nova.compute.manager [-] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.027 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.031 253542 DEBUG nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.036 253542 DEBUG nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.045 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance network_info: |[{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.050 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start _get_guest_xml network_info=[{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.052 253542 INFO nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.056 253542 WARNING nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.066 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.066 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.107 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.108 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.108 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.109 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.109 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.112 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.112 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.115 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:54 np0005534516 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 03:33:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164742920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.560 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.592 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.596 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.890 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deleting instance files /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea_del#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.892 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deletion of /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea_del complete#033[00m
Nov 25 03:33:54 np0005534516 nova_compute[253538]: 2025-11-25 08:33:54.997 253542 INFO nova.scheduler.client.report [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 528fb917-0169-441d-b32d-652963344aea#033[00m
Nov 25 03:33:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:33:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/277159241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.036 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.037 253542 DEBUG nova.virt.libvirt.vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.038 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.039 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.040 253542 DEBUG nova.objects.instance [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.043 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.043 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.052 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <name>instance-0000003a</name>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:33:54</nova:creationTime>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="serial">5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="uuid">5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:60:42:da"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <target dev="tap1682bdaf-1d"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log" append="off"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:33:55 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:33:55 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:33:55 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:33:55 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.052 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Preparing to wait for external event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.054 253542 DEBUG nova.virt.libvirt.vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.054 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.055 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.055 253542 DEBUG os_vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.056 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1682bdaf-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.063 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1682bdaf-1d, col_values=(('external_ids', {'iface-id': '1682bdaf-1dd6-4036-8d17-a169dbaaca8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:42:da', 'vm-uuid': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:55 np0005534516 NetworkManager[48915]: <info>  [1764059635.0656] manager: (tap1682bdaf-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.070 253542 INFO os_vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d')#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.134 253542 DEBUG oslo_concurrency.processutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.222 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.225 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.225 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:60:42:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.226 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Using config drive#033[00m
Nov 25 03:33:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 289 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.250 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.502 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.503 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.523 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.546 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating config drive at /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.558 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz454n58_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:33:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023425656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.616 253542 DEBUG oslo_concurrency.processutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.626 253542 DEBUG nova.compute.provider_tree [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.644 253542 DEBUG nova.scheduler.client.report [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.675 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.722 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz454n58_" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.745 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.748 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.779 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 40.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:33:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.942 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.943 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deleting local config drive /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config because it was imported into RBD.#033[00m
Nov 25 03:33:55 np0005534516 kernel: tap1682bdaf-1d: entered promiscuous mode
Nov 25 03:33:55 np0005534516 NetworkManager[48915]: <info>  [1764059635.9935] manager: (tap1682bdaf-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Nov 25 03:33:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:55Z|00531|binding|INFO|Claiming lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f for this chassis.
Nov 25 03:33:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:55Z|00532|binding|INFO|1682bdaf-1dd6-4036-8d17-a169dbaaca8f: Claiming fa:16:3e:60:42:da 10.100.0.9
Nov 25 03:33:55 np0005534516 nova_compute[253538]: 2025-11-25 08:33:55.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.004 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:42:da 10.100.0.9'], port_security=['fa:16:3e:60:42:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1682bdaf-1dd6-4036-8d17-a169dbaaca8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.005 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.006 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:33:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:56Z|00533|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f ovn-installed in OVS
Nov 25 03:33:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:56Z|00534|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f up in Southbound
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb976d80-d3df-4d37-bacd-de49a17926d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.020 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bf3cbfa-71 in ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.022 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bf3cbfa-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.022 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[845b3e57-0f16-4d50-a9e3-927b4e3b20a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea466ccc-c3cb-465f-9cfc-53c1eb3cf720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 systemd-udevd[315383]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.033 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b81d991a-0534-4249-af61-eda4791e0a90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 NetworkManager[48915]: <info>  [1764059636.0452] device (tap1682bdaf-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:33:56 np0005534516 systemd-machined[215790]: New machine qemu-67-instance-0000003a.
Nov 25 03:33:56 np0005534516 NetworkManager[48915]: <info>  [1764059636.0464] device (tap1682bdaf-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.054 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12f02f5f-6022-43be-a661-6613f489b21f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[794826d2-1346-46aa-bb72-06d6e6471881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a778c9-ced7-41df-b3f0-a6746aa5a34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 NetworkManager[48915]: <info>  [1764059636.0884] manager: (tap9bf3cbfa-70): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Nov 25 03:33:56 np0005534516 systemd-udevd[315387]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.113 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[afa14bc7-dda4-415c-a21d-584129275d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.116 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3e420b-ff8f-485f-90c6-efff38d39a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 NetworkManager[48915]: <info>  [1764059636.1351] device (tap9bf3cbfa-70): carrier: link connected
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.141 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f9f52-5db3-4a89-9084-4f7c4cefae56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.158 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd5775b-86aa-4aa2-903d-9288e7aa115b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315415, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.172 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c125742-094d-42ec-b242-4829f62fe2d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:8fc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499073, 'tstamp': 499073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315416, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.196 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[149c3794-3259-411c-907f-98d5f9e34412]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315417, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.233 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[854c6dce-3e7f-4f23-9803-65ec84793df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd1bdd7-e770-4f4f-ba49-b2e10b414c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:56 np0005534516 kernel: tap9bf3cbfa-70: entered promiscuous mode
Nov 25 03:33:56 np0005534516 NetworkManager[48915]: <info>  [1764059636.2992] manager: (tap9bf3cbfa-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.302 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:33:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:33:56Z|00535|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.304 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d4a9b2-c4ee-499a-9348-b5fc42bc6c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.306 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:33:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.306 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'env', 'PROCESS_TAG=haproxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059636.5310957, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.532 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Started (Lifecycle Event)#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.547 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.553 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059636.5314503, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.554 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.570 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.574 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:33:56 np0005534516 nova_compute[253538]: 2025-11-25 08:33:56.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:33:56 np0005534516 podman[315491]: 2025-11-25 08:33:56.720445854 +0000 UTC m=+0.073379283 container create fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 03:33:56 np0005534516 podman[315491]: 2025-11-25 08:33:56.671431267 +0000 UTC m=+0.024364776 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:33:56 np0005534516 systemd[1]: Started libpod-conmon-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope.
Nov 25 03:33:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:33:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ce801fda8e4bb6e919da95eaa73f45324ff65118d3fb3c246879cdb73c93b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:33:56 np0005534516 podman[315491]: 2025-11-25 08:33:56.849290248 +0000 UTC m=+0.202223707 container init fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:33:56 np0005534516 podman[315491]: 2025-11-25 08:33:56.85607795 +0000 UTC m=+0.209011409 container start fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:33:56 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : New worker (315512) forked
Nov 25 03:33:56 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : Loading success.
Nov 25 03:33:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 248 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Nov 25 03:33:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 25 03:33:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 25 03:33:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 25 03:33:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 213 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.498 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.499 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.516 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.592 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.593 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.604 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.605 253542 INFO nova.compute.claims [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.759 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.860 253542 DEBUG nova.compute.manager [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.860 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG nova.compute.manager [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Processing event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.862 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.882 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059640.87377, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.882 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.885 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.889 253542 INFO nova.virt.libvirt.driver [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance spawned successfully.#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.890 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.913 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.920 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.925 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.926 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:00.938 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.009 253542 INFO nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 12.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.010 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.082 253542 INFO nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 13.30 seconds to build instance.#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.095 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022972059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.223 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.229 253542 DEBUG nova.compute.provider_tree [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 190 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.242 253542 DEBUG nova.scheduler.client.report [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.264 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.264 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.321 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.322 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.337 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.355 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.439 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.440 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.441 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating image(s)#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.466 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.488 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.506 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.509 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.580 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.581 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.582 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.582 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.639 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.643 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:01 np0005534516 nova_compute[253538]: 2025-11-25 08:34:01.679 253542 DEBUG nova.policy [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4a674c4114a4e4fb5e446089be3ffc0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'adf7500b3b404802bc7f4ada42a72100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.017 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.086 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] resizing rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.165 253542 DEBUG nova.objects.instance [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'migration_context' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.177 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.178 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Ensure instance console log exists: /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.178 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.179 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:02 np0005534516 nova_compute[253538]: 2025-11-25 08:34:02.179 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.079 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Successfully created port: 40faec4b-dd3f-4659-972d-beeeb707761f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.120 253542 DEBUG nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.121 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:03 np0005534516 nova_compute[253538]: 2025-11-25 08:34:03.123 253542 WARNING nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 192 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013273924350459074 of space, bias 1.0, pg target 0.39821773051377224 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:34:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.490 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.490 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.523 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.814 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.815 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.821 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:04 np0005534516 nova_compute[253538]: 2025-11-25 08:34:04.821 253542 INFO nova.compute.claims [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.027 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 215 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685134027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.462 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.467 253542 DEBUG nova.compute.provider_tree [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.484 253542 DEBUG nova.scheduler.client.report [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.505 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.505 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.534 253542 DEBUG nova.compute.manager [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG nova.compute.manager [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.576 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.577 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.595 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.627 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.717 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.718 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.718 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating image(s)#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.750 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.777 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.801 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.805 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.908 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.910 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.911 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.912 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.938 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:05 np0005534516 nova_compute[253538]: 2025-11-25 08:34:05.942 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc bf44124c-1a65-4bde-a777-043ae1a53557_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 25 03:34:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.162 253542 DEBUG nova.policy [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.165 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Successfully updated port: 40faec4b-dd3f-4659-972d-beeeb707761f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:06 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.182 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.183 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquired lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.183 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.268 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc bf44124c-1a65-4bde-a777-043ae1a53557_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.329 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.359 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.425 253542 DEBUG nova.objects.instance [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.443 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.444 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Ensure instance console log exists: /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:06 np0005534516 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.147 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.170 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Releasing lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.171 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance network_info: |[{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.175 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start _get_guest_xml network_info=[{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.180 253542 WARNING nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.185 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.186 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.190 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.191 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.191 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.192 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.192 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.195 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.195 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.196 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.199 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 242 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.243 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Successfully created port: 269f9bd3-f267-459c-8e24-4b1f6c943345 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.517 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.518 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.537 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429719939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.689 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.714 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:07 np0005534516 nova_compute[253538]: 2025-11-25 08:34:07.718 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:07 np0005534516 podman[315937]: 2025-11-25 08:34:07.792019866 +0000 UTC m=+0.049386429 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.077 253542 DEBUG nova.compute.manager [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-changed-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.078 253542 DEBUG nova.compute.manager [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Refreshing instance network info cache due to event network-changed-40faec4b-dd3f-4659-972d-beeeb707761f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.079 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.079 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.080 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Refreshing network info cache for port 40faec4b-dd3f-4659-972d-beeeb707761f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796066015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.150 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.151 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.167 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.169 253542 DEBUG nova.virt.libvirt.vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535511',owner
_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:01Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.170 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.171 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.173 253542 DEBUG nova.objects.instance [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.176 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.193 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <uuid>e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</uuid>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <name>instance-0000003b</name>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1084102530</nova:name>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:07</nova:creationTime>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:user uuid="d4a674c4114a4e4fb5e446089be3ffc0">tempest-InstanceActionsNegativeTestJSON-800535511-project-member</nova:user>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:project uuid="adf7500b3b404802bc7f4ada42a72100">tempest-InstanceActionsNegativeTestJSON-800535511</nova:project>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <nova:port uuid="40faec4b-dd3f-4659-972d-beeeb707761f">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="serial">e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="uuid">e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:37:c8:d4"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <target dev="tap40faec4b-dd"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/console.log" append="off"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:08 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:08 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:08 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:08 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.203 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Preparing to wait for external event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.204 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.204 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.205 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.206 253542 DEBUG nova.virt.libvirt.vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535
511',owner_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:01Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.206 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.207 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.208 253542 DEBUG os_vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.209 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.210 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.214 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40faec4b-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.217 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap40faec4b-dd, col_values=(('external_ids', {'iface-id': '40faec4b-dd3f-4659-972d-beeeb707761f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:c8:d4', 'vm-uuid': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:08 np0005534516 NetworkManager[48915]: <info>  [1764059648.2212] manager: (tap40faec4b-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.230 253542 INFO os_vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd')#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.270 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.271 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.278 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.279 253542 INFO nova.compute.claims [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.307 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No VIF found with MAC fa:16:3e:37:c8:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Using config drive#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.329 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.398 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Successfully updated port: 269f9bd3-f267-459c-8e24-4b1f6c943345 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.414 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.415 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.415 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.485 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779625697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.924 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.931 253542 DEBUG nova.compute.provider_tree [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.944 253542 DEBUG nova.scheduler.client.report [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.967 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:08 np0005534516 nova_compute[253538]: 2025-11-25 08:34:08.970 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.040 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.042 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.069 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.088 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 258 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 136 op/s
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.390 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.394 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.395 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.396 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating image(s)#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.428 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.455 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.489 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.493 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.581 253542 DEBUG nova.policy [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.583 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.584 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.585 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.587 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.621 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:09 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.627 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 52d39d67-b456-44e4-8804-2de0c941edae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:09.999 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 52d39d67-b456-44e4-8804-2de0c941edae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.060 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.154 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating config drive at /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.161 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe23862qi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.200 253542 DEBUG nova.objects.instance [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.213 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.214 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Ensure instance console log exists: /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.215 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.300 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe23862qi" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.331 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.336 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.486 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.487 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deleting local config drive /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:10 np0005534516 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 03:34:10 np0005534516 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.5421] manager: (tap40faec4b-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 25 03:34:10 np0005534516 kernel: tap40faec4b-dd: entered promiscuous mode
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:10Z|00536|binding|INFO|Claiming lport 40faec4b-dd3f-4659-972d-beeeb707761f for this chassis.
Nov 25 03:34:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:10Z|00537|binding|INFO|40faec4b-dd3f-4659-972d-beeeb707761f: Claiming fa:16:3e:37:c8:d4 10.100.0.4
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.552 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:c8:d4 10.100.0.4'], port_security=['fa:16:3e:37:c8:d4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf7500b3b404802bc7f4ada42a72100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89f00338-7004-4e97-a33e-2330c787d850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f67d87c1-e8c8-46a3-b6be-f7e585c56ed4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40faec4b-dd3f-4659-972d-beeeb707761f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.554 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40faec4b-dd3f-4659-972d-beeeb707761f in datapath 610430d6-5ea7-4c04-9b64-2dc2d0a55169 bound to our chassis#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.555 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 610430d6-5ea7-4c04-9b64-2dc2d0a55169#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32228a32-8d82-40ce-993d-7c2e06fc23e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.571 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap610430d6-51 in ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.574 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap610430d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.574 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1004302b-5d26-41f8-832d-a664df834df1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59960b4b-575a-4aef-8cf8-50b4bd5aaa00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:10Z|00538|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f ovn-installed in OVS
Nov 25 03:34:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:10Z|00539|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f up in Southbound
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 systemd-udevd[316239]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.589 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[dccc95bf-ff2b-4358-a0e3-0d55be3283ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 systemd-machined[215790]: New machine qemu-68-instance-0000003b.
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.6023] device (tap40faec4b-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.6036] device (tap40faec4b-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a210e8b-587f-4784-bc8a-4016ec0b6693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 systemd[1]: Started Virtual Machine qemu-68-instance-0000003b.
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.633 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7b8fd0-488f-418a-befc-eea3cdc2c0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03813421-5fa8-4ab6-940a-32977ede8628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.6440] manager: (tap610430d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6449b333-2ce0-4e55-a996-785a324229e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.676 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c883bf40-00ac-4ac0-b76d-34512bd672b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.6961] device (tap610430d6-50): carrier: link connected
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.701 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfd3718-b9fd-4177-9ede-543404dd212b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.721 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7b00ee-55b1-4c00-aa8d-d9ac75a4a7be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap610430d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:c7:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500529, 'reachable_time': 34207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316271, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f94fd54f-631b-4c13-9f65-29d61a921d8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:c76b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500529, 'tstamp': 500529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316272, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15339852-b9fe-4f1b-9406-e4e06fde75d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap610430d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:c7:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500529, 'reachable_time': 34207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316273, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae4ce48-0654-4d69-86f6-14059845e110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e164d01f-fa54-4644-a744-4de320ac025a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap610430d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap610430d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 NetworkManager[48915]: <info>  [1764059650.8521] manager: (tap610430d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 25 03:34:10 np0005534516 kernel: tap610430d6-50: entered promiscuous mode
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.855 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap610430d6-50, col_values=(('external_ids', {'iface-id': 'cdf0d105-3698-42de-8860-e30ea5c5fbfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:10Z|00540|binding|INFO|Releasing lport cdf0d105-3698-42de-8860-e30ea5c5fbfa from this chassis (sb_readonly=0)
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 nova_compute[253538]: 2025-11-25 08:34:10.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.878 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7943f8d5-e455-455e-a6ba-be80491eac73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.880 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-610430d6-5ea7-4c04-9b64-2dc2d0a55169
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 610430d6-5ea7-4c04-9b64-2dc2d0a55169
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:34:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.881 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'env', 'PROCESS_TAG=haproxy-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/610430d6-5ea7-4c04-9b64-2dc2d0a55169.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:34:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.056 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.0560687, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.074 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.0583458, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.074 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.088 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.091 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.103 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.188 253542 DEBUG nova.compute.manager [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-changed-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.189 253542 DEBUG nova.compute.manager [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Refreshing instance network info cache due to event network-changed-269f9bd3-f267-459c-8e24-4b1f6c943345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.189 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 262 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 143 op/s
Nov 25 03:34:11 np0005534516 podman[316346]: 2025-11-25 08:34:11.277754098 +0000 UTC m=+0.052924523 container create a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:34:11 np0005534516 systemd[1]: Started libpod-conmon-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope.
Nov 25 03:34:11 np0005534516 podman[316346]: 2025-11-25 08:34:11.251292557 +0000 UTC m=+0.026463002 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:34:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e19f3f33dd4cf5e2ac16478c7cf2cb2437ef71f6951caaa3e6af27e077a7e4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:11 np0005534516 podman[316346]: 2025-11-25 08:34:11.370738607 +0000 UTC m=+0.145909082 container init a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:34:11 np0005534516 podman[316346]: 2025-11-25 08:34:11.375843665 +0000 UTC m=+0.151014110 container start a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:34:11 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : New worker (316367) forked
Nov 25 03:34:11 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : Loading success.
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.824 253542 DEBUG nova.compute.manager [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.825 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.825 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.826 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.826 253542 DEBUG nova.compute.manager [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Processing event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.827 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.832 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.833 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.8322413, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.833 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.838 253542 INFO nova.virt.libvirt.driver [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance spawned successfully.#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.838 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.856 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.862 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.864 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.864 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.865 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.865 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.866 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.866 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.887 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.924 253542 INFO nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 10.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.924 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.935 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.964 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.964 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance network_info: |[{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.965 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.965 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Refreshing network info cache for port 269f9bd3-f267-459c-8e24-4b1f6c943345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.968 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start _get_guest_xml network_info=[{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.972 253542 WARNING nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.978 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.979 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.988 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.988 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.989 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.989 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.993 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:11 np0005534516 nova_compute[253538]: 2025-11-25 08:34:11.997 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.035 253542 INFO nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 11.48 seconds to build instance.#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.056 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.106 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updated VIF entry in instance network info cache for port 40faec4b-dd3f-4659-972d-beeeb707761f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.107 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.109 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully created port: 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.119 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782277273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.459 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.491 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.496 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:12 np0005534516 nova_compute[253538]: 2025-11-25 08:34:12.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:34:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467150374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.032 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.034 253542 DEBUG nova.virt.libvirt.vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-
2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.035 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.036 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.038 253542 DEBUG nova.objects.instance [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.053 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <uuid>bf44124c-1a65-4bde-a777-043ae1a53557</uuid>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <name>instance-0000003c</name>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersTestJSON-server-960435538</nova:name>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:11</nova:creationTime>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <nova:port uuid="269f9bd3-f267-459c-8e24-4b1f6c943345">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="serial">bf44124c-1a65-4bde-a777-043ae1a53557</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="uuid">bf44124c-1a65-4bde-a777-043ae1a53557</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/bf44124c-1a65-4bde-a777-043ae1a53557_disk">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/bf44124c-1a65-4bde-a777-043ae1a53557_disk.config">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:13:ae:8e"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <target dev="tap269f9bd3-f2"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/console.log" append="off"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:13 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:13 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:13 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:13 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Preparing to wait for external event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.056 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.057 253542 DEBUG nova.virt.libvirt.vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServer
sTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.057 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.058 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.058 253542 DEBUG os_vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f9bd3-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap269f9bd3-f2, col_values=(('external_ids', {'iface-id': '269f9bd3-f267-459c-8e24-4b1f6c943345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ae:8e', 'vm-uuid': 'bf44124c-1a65-4bde-a777-043ae1a53557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:13 np0005534516 NetworkManager[48915]: <info>  [1764059653.0693] manager: (tap269f9bd3-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.076 253542 INFO os_vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2')#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.126 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.126 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.127 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:13:ae:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.129 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Using config drive#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.167 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 278 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.545 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully updated port: 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.556 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.556 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.557 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.612 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating config drive at /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.618 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp0ykwq6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.756 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp0ykwq6n" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.783 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.787 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config bf44124c-1a65-4bde-a777-043ae1a53557_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.825 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.851 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.887 253542 DEBUG nova.compute.manager [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.888 253542 DEBUG nova.compute.manager [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.889 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.896 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updated VIF entry in instance network info cache for port 269f9bd3-f267-459c-8e24-4b1f6c943345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.897 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.924 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.975 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config bf44124c-1a65-4bde-a777-043ae1a53557_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:13 np0005534516 nova_compute[253538]: 2025-11-25 08:34:13.975 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deleting local config drive /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:14 np0005534516 kernel: tap269f9bd3-f2: entered promiscuous mode
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.0341] manager: (tap269f9bd3-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00541|binding|INFO|Claiming lport 269f9bd3-f267-459c-8e24-4b1f6c943345 for this chassis.
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00542|binding|INFO|269f9bd3-f267-459c-8e24-4b1f6c943345: Claiming fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.042 253542 DEBUG nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.043 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.043 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 DEBUG nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 WARNING nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received unexpected event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00543|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 ovn-installed in OVS
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 systemd-udevd[316517]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00544|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 up in Southbound
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.086 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ae:8e 10.100.0.12'], port_security=['fa:16:3e:13:ae:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf44124c-1a65-4bde-a777-043ae1a53557', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=269f9bd3-f267-459c-8e24-4b1f6c943345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.088 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 269f9bd3-f267-459c-8e24-4b1f6c943345 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.089 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.100 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[788132f1-f076-444d-8242-f25028f7289c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.101 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.103 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.103 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d13b36b5-1c90-4d0a-9e7b-080f55d2747e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.1077] device (tap269f9bd3-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.108 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[641b8c19-4bec-451f-8a94-48a8a1110ce4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.1088] device (tap269f9bd3-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:14 np0005534516 systemd-machined[215790]: New machine qemu-69-instance-0000003c.
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.117 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[04993176-0095-43ab-a472-1d447779b02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.142 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdc8a31-0c3b-45d9-a44f-ce2ce103bba3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 podman[316508]: 2025-11-25 08:34:14.146722155 +0000 UTC m=+0.087881733 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.176 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9f392d85-3c3d-4a90-83f1-126d4db82971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.1851] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.184 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89011dd-8d5c-4458-ba66-0b8479da06cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.223 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a14d6b4-9f79-4742-8f59-15b3550cac24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.226 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b7ac97-a041-44c7-a2f6-8d22232c9ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.2456] device (tapa66e51b8-e0): carrier: link connected
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.250 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7905d2cd-e11e-43b0-a934-b1c67eccd7fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae58236a-7597-4860-a78b-e9706d37cf4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500884, 'reachable_time': 36770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316561, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.287 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[810984ab-f9bd-4da6-88f5-cebf7dfd011d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500884, 'tstamp': 500884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316562, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.302 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0de82015-6cae-460b-929d-af914e7ca3cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500884, 'reachable_time': 36770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316563, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd13f4cc-0e2c-4397-924c-4dd51f2c0834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e545cd4e-9adc-4137-a3ad-8fe9374ec5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.391 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.3944] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.396 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00545|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.416 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2854f836-cfc9-4cdc-b7c0-03ab83e5018b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.417 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.418 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:42:da 10.100.0.9
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:42:da 10.100.0.9
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.488 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059654.4873943, bf44124c-1a65-4bde-a777-043ae1a53557 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.488 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.517 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.521 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059654.48775, bf44124c-1a65-4bde-a777-043ae1a53557 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.521 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.588 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.600 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.618 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.746 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.746 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.749 253542 INFO nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Terminating instance#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.750 253542 DEBUG nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:34:14 np0005534516 podman[316637]: 2025-11-25 08:34:14.766736778 +0000 UTC m=+0.055366769 container create 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 03:34:14 np0005534516 kernel: tap40faec4b-dd (unregistering): left promiscuous mode
Nov 25 03:34:14 np0005534516 NetworkManager[48915]: <info>  [1764059654.7962] device (tap40faec4b-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:14 np0005534516 systemd[1]: Started libpod-conmon-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope.
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00546|binding|INFO|Releasing lport 40faec4b-dd3f-4659-972d-beeeb707761f from this chassis (sb_readonly=0)
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00547|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f down in Southbound
Nov 25 03:34:14 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:14Z|00548|binding|INFO|Removing iface tap40faec4b-dd ovn-installed in OVS
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.820 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:c8:d4 10.100.0.4'], port_security=['fa:16:3e:37:c8:d4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf7500b3b404802bc7f4ada42a72100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89f00338-7004-4e97-a33e-2330c787d850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f67d87c1-e8c8-46a3-b6be-f7e585c56ed4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40faec4b-dd3f-4659-972d-beeeb707761f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:14 np0005534516 podman[316637]: 2025-11-25 08:34:14.735466747 +0000 UTC m=+0.024096758 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:34:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245a978a5022c98149f488e1744d8b7505a54bb18353d96e818cdced9046a6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:14 np0005534516 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 25 03:34:14 np0005534516 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Consumed 3.320s CPU time.
Nov 25 03:34:14 np0005534516 systemd-machined[215790]: Machine qemu-68-instance-0000003b terminated.
Nov 25 03:34:14 np0005534516 podman[316637]: 2025-11-25 08:34:14.845489234 +0000 UTC m=+0.134119265 container init 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:34:14 np0005534516 podman[316637]: 2025-11-25 08:34:14.851780634 +0000 UTC m=+0.140410645 container start 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 03:34:14 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : New worker (316661) forked
Nov 25 03:34:14 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : Loading success.
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.911 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40faec4b-dd3f-4659-972d-beeeb707761f in datapath 610430d6-5ea7-4c04-9b64-2dc2d0a55169 unbound from our chassis#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.913 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 610430d6-5ea7-4c04-9b64-2dc2d0a55169, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.914 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a25f166a-aaeb-41c7-a521-8b057df21e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.914 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 namespace which is not needed anymore#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.915 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.916 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.935 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.986 253542 INFO nova.virt.libvirt.driver [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance destroyed successfully.#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.986 253542 DEBUG nova.objects.instance [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'resources' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.998 253542 DEBUG nova.virt.libvirt.vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535511',owner_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:11Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.998 253542 DEBUG nova.network.os_vif_util [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:14 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.999 253542 DEBUG nova.network.os_vif_util [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:14.999 253542 DEBUG os_vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.003 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40faec4b-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.011 253542 INFO os_vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd')#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.031 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.031 253542 INFO nova.compute.claims [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.034 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.034 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance network_info: |[{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.035 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.035 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.038 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start _get_guest_xml network_info=[{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.045 253542 WARNING nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.054 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.055 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:15 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : haproxy version is 2.8.14-c23fe91
Nov 25 03:34:15 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : path to executable is /usr/sbin/haproxy
Nov 25 03:34:15 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [WARNING]  (316365) : Exiting Master process...
Nov 25 03:34:15 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [ALERT]    (316365) : Current worker (316367) exited with code 143 (Terminated)
Nov 25 03:34:15 np0005534516 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [WARNING]  (316365) : All workers exited. Exiting... (0)
Nov 25 03:34:15 np0005534516 systemd[1]: libpod-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope: Deactivated successfully.
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.064 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:15 np0005534516 podman[316697]: 2025-11-25 08:34:15.069291399 +0000 UTC m=+0.054338731 container died a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.073 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f-userdata-shm.mount: Deactivated successfully.
Nov 25 03:34:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1e19f3f33dd4cf5e2ac16478c7cf2cb2437ef71f6951caaa3e6af27e077a7e4a-merged.mount: Deactivated successfully.
Nov 25 03:34:15 np0005534516 podman[316697]: 2025-11-25 08:34:15.125952363 +0000 UTC m=+0.110999695 container cleanup a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:34:15 np0005534516 systemd[1]: libpod-conmon-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope: Deactivated successfully.
Nov 25 03:34:15 np0005534516 podman[316742]: 2025-11-25 08:34:15.203408584 +0000 UTC m=+0.056854489 container remove a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0838eac5-43e9-4312-9c33-d1c4e2e2b491]: (4, ('Tue Nov 25 08:34:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 (a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f)\na9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f\nTue Nov 25 08:34:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 (a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f)\na9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79e36448-8cdc-45b6-96f4-6684f479b756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.213 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap610430d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:15 np0005534516 kernel: tap610430d6-50: left promiscuous mode
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d6fed4-77df-4138-ae20-2706d67953ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 328 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.5 MiB/s wr, 170 op/s
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.254 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25f1c2bd-4990-4244-8281-a3ab1aea1018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acb004c7-67df-4299-ac5d-7ec3ceaab88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.269 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.271 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b6960f-8823-4988-ae76-50a7dbf9fe04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500522, 'reachable_time': 42502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316777, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 systemd[1]: run-netns-ovnmeta\x2d610430d6\x2d5ea7\x2d4c04\x2d9b64\x2d2dc2d0a55169.mount: Deactivated successfully.
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.276 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:34:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.277 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[73520c1e-80f4-4574-9649-4d5df306534d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.305 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.322 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.322 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.323 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.367 253542 INFO nova.virt.libvirt.driver [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deleting instance files /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_del#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.369 253542 INFO nova.virt.libvirt.driver [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deletion of /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_del complete#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.417 253542 INFO nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.419 253542 DEBUG oslo.service.loopingcall [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.419 253542 DEBUG nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.420 253542 DEBUG nova.network.neutron [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:34:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191464744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.555 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.586 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.591 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369036725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.767 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.772 253542 DEBUG nova.compute.provider_tree [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.784 253542 DEBUG nova.scheduler.client.report [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.802 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.803 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.850 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.850 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.866 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.879 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.964 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.965 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.965 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating image(s)#033[00m
Nov 25 03:34:15 np0005534516 nova_compute[253538]: 2025-11-25 08:34:15.984 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.006 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348268589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.028 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.031 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.064 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.065 253542 DEBUG nova.virt.libvirt.vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.066 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.066 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.067 253542 DEBUG nova.objects.instance [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.084 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <name>instance-0000003d</name>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:15</nova:creationTime>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="serial">52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="uuid">52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/52d39d67-b456-44e4-8804-2de0c941edae_disk">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:4d:ce:d4"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <target dev="tap9fa407fa-66"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log" append="off"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:16 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:16 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:16 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:16 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Preparing to wait for external event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.086 253542 DEBUG nova.virt.libvirt.vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.086 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG os_vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fa407fa-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fa407fa-66, col_values=(('external_ids', {'iface-id': '9fa407fa-661b-4b02-b4f4-656f6ae34cd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:ce:d4', 'vm-uuid': '52d39d67-b456-44e4-8804-2de0c941edae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:16 np0005534516 NetworkManager[48915]: <info>  [1764059656.0940] manager: (tap9fa407fa-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.098 253542 INFO os_vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66')#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.104 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.104 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.105 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.105 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.123 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.126 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.192 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.192 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.193 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:4d:ce:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.193 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Using config drive#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.213 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.220 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Processing event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 WARNING nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.223 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.226 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059656.2259817, bf44124c-1a65-4bde-a777-043ae1a53557 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.226 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.228 253542 DEBUG nova.policy [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c199ca353ed54a53ab7fe37d3089c82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23237e7592b247838e62457157e64e9e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.230 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.247 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance spawned successfully.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.247 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.248 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.253 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.273 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.276 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.278 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.278 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.290 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.290 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 WARNING nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received unexpected event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.420 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.460 253542 INFO nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 10.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.460 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.508 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] resizing rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.606 253542 DEBUG nova.objects.instance [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'migration_context' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.608 253542 INFO nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 11.82 seconds to build instance.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.625 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.625 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ensure instance console log exists: /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.630 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.664 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.664 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.673 253542 DEBUG nova.network.neutron [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.680 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.690 253542 INFO nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 1.27 seconds to deallocate network for instance.#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.729 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.729 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:16 np0005534516 nova_compute[253538]: 2025-11-25 08:34:16.855 253542 DEBUG oslo_concurrency.processutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.118 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating config drive at /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.127 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz8t7lsc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 321 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.5 MiB/s wr, 241 op/s
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.274 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz8t7lsc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.304 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.308 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config 52d39d67-b456-44e4-8804-2de0c941edae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519464579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.352 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Successfully created port: 9200cc12-927d-418b-99c1-ca0421535979 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.357 253542 DEBUG oslo_concurrency.processutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.363 253542 DEBUG nova.compute.provider_tree [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.381 253542 DEBUG nova.scheduler.client.report [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.421 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.451 253542 INFO nova.scheduler.client.report [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Deleted allocations for instance e3f4ee5b-6bb5-456f-b522-426ea1ebf32f#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.452 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config 52d39d67-b456-44e4-8804-2de0c941edae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.452 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deleting local config drive /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:17 np0005534516 NetworkManager[48915]: <info>  [1764059657.4967] manager: (tap9fa407fa-66): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Nov 25 03:34:17 np0005534516 kernel: tap9fa407fa-66: entered promiscuous mode
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:17Z|00549|binding|INFO|Claiming lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for this chassis.
Nov 25 03:34:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:17Z|00550|binding|INFO|9fa407fa-661b-4b02-b4f4-656f6ae34cd8: Claiming fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.507 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.507 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.508 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.510 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.526 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[906b886a-2ed6-4661-a1bf-88845063611d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:17Z|00551|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 ovn-installed in OVS
Nov 25 03:34:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:17Z|00552|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 up in Southbound
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:17 np0005534516 systemd-udevd[317114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.559 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3d41e5f0-0b4b-4efc-896b-35f66f665413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.562 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[318d5717-8010-45cb-95c9-136f90b7583c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 systemd-machined[215790]: New machine qemu-70-instance-0000003d.
Nov 25 03:34:17 np0005534516 NetworkManager[48915]: <info>  [1764059657.5682] device (tap9fa407fa-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:17 np0005534516 NetworkManager[48915]: <info>  [1764059657.5692] device (tap9fa407fa-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:17 np0005534516 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.589 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e289bd42-f6e1-4d24-abb8-a8a570b8dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[776bcfc5-0859-4bb2-aeda-a9347a5daf45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317130, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[311749a8-d93d-4c14-9148-45c2751fd292]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317137, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317137, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:17 np0005534516 nova_compute[253538]: 2025-11-25 08:34:17.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:17 np0005534516 podman[317099]: 2025-11-25 08:34:17.647798269 +0000 UTC m=+0.125782161 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.041 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.0411303, 52d39d67-b456-44e4-8804-2de0c941edae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.042 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.055 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.059 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.041302, 52d39d67-b456-44e4-8804-2de0c941edae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.059 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.079 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.098 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.295 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-deleted-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.296 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.296 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Processing event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.298 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.298 253542 WARNING nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.299 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.303 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.3034234, 52d39d67-b456-44e4-8804-2de0c941edae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.303 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.304 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.307 253542 INFO nova.virt.libvirt.driver [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance spawned successfully.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.307 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.322 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.327 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.327 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.328 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.328 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.332 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.332 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.334 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.336 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.337 253542 DEBUG nova.objects.instance [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.362 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.377 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Successfully updated port: 9200cc12-927d-418b-99c1-ca0421535979 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.378 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.411 253542 INFO nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.412 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.483 253542 INFO nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 10.24 seconds to build instance.#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.496 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:18 np0005534516 nova_compute[253538]: 2025-11-25 08:34:18.555 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 335 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 278 op/s
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.726 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.744 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.745 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance network_info: |[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.748 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start _get_guest_xml network_info=[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.925 253542 WARNING nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.931 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.931 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.934 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.935 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.935 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.936 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.936 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:19 np0005534516 nova_compute[253538]: 2025-11-25 08:34:19.943 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.407 253542 DEBUG nova.compute.manager [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-changed-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.408 253542 DEBUG nova.compute.manager [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Refreshing instance network info cache due to event network-changed-9200cc12-927d-418b-99c1-ca0421535979. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.408 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.409 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.409 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Refreshing network info cache for port 9200cc12-927d-418b-99c1-ca0421535979 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134672587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.440 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.466 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.470 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203499940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.957 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.959 253542 DEBUG nova.virt.libvirt.vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-tempest.common.compute-instance-614557291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:15Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.960 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.962 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.964 253542 DEBUG nova.objects.instance [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.980 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <uuid>420c5373-d9c4-4da0-9658-90eff9a19f8d</uuid>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <name>instance-0000003e</name>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:name>tempest-tempest.common.compute-instance-614557291</nova:name>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:19</nova:creationTime>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <nova:port uuid="9200cc12-927d-418b-99c1-ca0421535979">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="serial">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="uuid">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:73:f0:9b"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <target dev="tap9200cc12-92"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log" append="off"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:20 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:20 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:20 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:20 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.981 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Preparing to wait for external event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.981 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.982 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.982 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.983 253542 DEBUG nova.virt.libvirt.vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-tempest.common.compute-instance-614557291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-Ser
verActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:15Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.983 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.984 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.985 253542 DEBUG os_vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.989 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9200cc12-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.990 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9200cc12-92, col_values=(('external_ids', {'iface-id': '9200cc12-927d-418b-99c1-ca0421535979', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:f0:9b', 'vm-uuid': '420c5373-d9c4-4da0-9658-90eff9a19f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:20 np0005534516 NetworkManager[48915]: <info>  [1764059660.9927] manager: (tap9200cc12-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:20 np0005534516 nova_compute[253538]: 2025-11-25 08:34:20.999 253542 INFO os_vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.048 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.049 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.049 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:73:f0:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.050 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Using config drive#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.075 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 341 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 330 op/s
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.463 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating config drive at /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.472 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6a3fg7gu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.620 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6a3fg7gu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.644 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.647 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.815 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.816 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting local config drive /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:21 np0005534516 kernel: tap9200cc12-92: entered promiscuous mode
Nov 25 03:34:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:21Z|00553|binding|INFO|Claiming lport 9200cc12-927d-418b-99c1-ca0421535979 for this chassis.
Nov 25 03:34:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:21Z|00554|binding|INFO|9200cc12-927d-418b-99c1-ca0421535979: Claiming fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 03:34:21 np0005534516 NetworkManager[48915]: <info>  [1764059661.8829] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:21Z|00555|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 ovn-installed in OVS
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:21 np0005534516 nova_compute[253538]: 2025-11-25 08:34:21.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:21 np0005534516 systemd-udevd[317339]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.931 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:21Z|00556|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 up in Southbound
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.933 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:34:21 np0005534516 systemd-machined[215790]: New machine qemu-71-instance-0000003e.
Nov 25 03:34:21 np0005534516 NetworkManager[48915]: <info>  [1764059661.9423] device (tap9200cc12-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:21 np0005534516 NetworkManager[48915]: <info>  [1764059661.9429] device (tap9200cc12-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:21 np0005534516 systemd[1]: Started Virtual Machine qemu-71-instance-0000003e.
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60e04df7-37c5-40bf-833d-b51e9b0781b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.994 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9faadc58-27ca-4ea1-8b23-8500ff166054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.997 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d88bbcdf-d498-46ff-b814-0bd432433ff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.033 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2bc74b-9159-4f26-b361-bc6f4a1d85b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.050 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[015e0c6e-fe91-43ab-841a-62720a4e776d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317354, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.074 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d81bdc1-ca1e-4d87-a283-15f0c5b844ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317355, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317355, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.076 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101747595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.130 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.225 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.225 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.229 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.230 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.234 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.234 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.238 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.238 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.333 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updated VIF entry in instance network info cache for port 9200cc12-927d-418b-99c1-ca0421535979. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.333 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.347 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG nova.compute.manager [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.362 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.362 253542 DEBUG nova.compute.manager [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Processing event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.446 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.447 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.446147, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.451 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.455 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance spawned successfully.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.455 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.469 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.475 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.478 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.478 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.482 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.482 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.484 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.448038, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:22Z|00557|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:34:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:22Z|00558|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:34:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:22Z|00559|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.552 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.555 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.4502008, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.559 253542 INFO nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 6.60 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.560 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.561 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.83451461791992GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.577 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.610 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.656 253542 INFO nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 7.67 seconds to build instance.#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.673 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.681 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance bf44124c-1a65-4bde-a777-043ae1a53557 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 52d39d67-b456-44e4-8804-2de0c941edae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 420c5373-d9c4-4da0-9658-90eff9a19f8d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:34:22 np0005534516 nova_compute[253538]: 2025-11-25 08:34:22.791 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264092626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.235 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.246 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 362 op/s
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.260 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.284 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.284 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.854 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.855 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:23 np0005534516 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.514 253542 DEBUG nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.516 253542 DEBUG nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:24 np0005534516 nova_compute[253538]: 2025-11-25 08:34:24.516 253542 WARNING nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.3 MiB/s wr, 391 op/s
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.670 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.671 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.692 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.810 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.810 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.811 253542 DEBUG nova.objects.instance [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:25 np0005534516 nova_compute[253538]: 2025-11-25 08:34:25.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.212 253542 INFO nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Rebuilding instance#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.279 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.304 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.474 253542 DEBUG nova.objects.instance [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.485 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.496 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.514 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.567 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_requests' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.581 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.591 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.602 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'migration_context' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.611 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.620 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.827 253542 DEBUG nova.policy [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.939 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:26 np0005534516 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 313 op/s
Nov 25 03:34:27 np0005534516 nova_compute[253538]: 2025-11-25 08:34:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.427 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.655 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.655 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.673 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.866 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully updated port: f2a4b65b-419e-44be-9413-f01693268aa8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:28 np0005534516 nova_compute[253538]: 2025-11-25 08:34:28.899 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:34:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:34:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:34:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.093 253542 DEBUG nova.compute.manager [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.093 253542 DEBUG nova.compute.manager [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.094 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.823 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.823 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.916 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.916 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.917 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.977 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059654.976347, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:29 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.977 253542 INFO nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:34:30 np0005534516 nova_compute[253538]: 2025-11-25 08:34:29.999 253542 DEBUG nova.compute.manager [None req-b7fb386f-e36b-4863-b455-4db65d1f22e7 - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:30 np0005534516 nova_compute[253538]: 2025-11-25 08:34:30.411 253542 WARNING nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:34:30 np0005534516 nova_compute[253538]: 2025-11-25 08:34:30.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:30 np0005534516 nova_compute[253538]: 2025-11-25 08:34:30.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:31Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 03:34:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:31Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 03:34:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 349 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 1017 KiB/s wr, 192 op/s
Nov 25 03:34:31 np0005534516 nova_compute[253538]: 2025-11-25 08:34:31.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.092 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.093 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.780 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.811 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.812 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.813 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.817 253542 DEBUG nova.virt.libvirt.vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.817 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG os_vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.821 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2a4b65b-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.822 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2a4b65b-41, col_values=(('external_ids', {'iface-id': 'f2a4b65b-419e-44be-9413-f01693268aa8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:ed:fc', 'vm-uuid': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:32 np0005534516 NetworkManager[48915]: <info>  [1764059672.8242] manager: (tapf2a4b65b-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.832 253542 INFO os_vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.832 253542 DEBUG nova.virt.libvirt.vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.833 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.833 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.835 253542 DEBUG nova.virt.libvirt.guest [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:34:32 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:32 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:32 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:32 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:32 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:32 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:32 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:34:32 np0005534516 kernel: tapf2a4b65b-41: entered promiscuous mode
Nov 25 03:34:32 np0005534516 NetworkManager[48915]: <info>  [1764059672.8463] manager: (tapf2a4b65b-41): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:32Z|00560|binding|INFO|Claiming lport f2a4b65b-419e-44be-9413-f01693268aa8 for this chassis.
Nov 25 03:34:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:32Z|00561|binding|INFO|f2a4b65b-419e-44be-9413-f01693268aa8: Claiming fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.856 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.857 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.859 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:32Z|00562|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 ovn-installed in OVS
Nov 25 03:34:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:32Z|00563|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 up in Southbound
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40fd45b-8f96-4c8d-b1f4-da1f23b01e04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 systemd-udevd[317552]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:32 np0005534516 NetworkManager[48915]: <info>  [1764059672.9089] device (tapf2a4b65b-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:32 np0005534516 NetworkManager[48915]: <info>  [1764059672.9101] device (tapf2a4b65b-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.911 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e6cc68-b044-4700-91d7-ef8fc711a167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.914 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[363f7f09-3ed5-4cba-8ef6-e3b7490f476b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.948 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c70b2c5b-989f-4ad2-8820-dd94fd77e48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.972 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8ed971-e780-462a-95f1-91b416e3252f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317570, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.985 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:60:42:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:32 np0005534516 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:26:ed:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.990 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5a4d2e-2ccf-460a-84bf-85fbae382c04]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317573, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317573, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.062 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.064 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.073 253542 DEBUG nova.virt.libvirt.guest [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 03:34:33 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:33 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:33 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:33 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:33 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.098 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:33Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 03:34:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:33Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 03:34:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 360 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Nov 25 03:34:33 np0005534516 podman[317604]: 2025-11-25 08:34:33.472498943 +0000 UTC m=+0.306198961 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:34:33 np0005534516 podman[317624]: 2025-11-25 08:34:33.685464067 +0000 UTC m=+0.109184616 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:34:33 np0005534516 podman[317604]: 2025-11-25 08:34:33.700453389 +0000 UTC m=+0.534153417 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:34:33 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.797 253542 DEBUG nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:33 np0005534516 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 WARNING nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:35.095 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fa43844e-6990-4241-af3d-4b8e73492270 does not exist
Nov 25 03:34:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4d3f1dd5-f01d-42ee-ad50-a0835956ceec does not exist
Nov 25 03:34:35 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 981f73f4-017f-47a9-8878-f8da23fe9ad5 does not exist
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 402 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:34:35 np0005534516 nova_compute[253538]: 2025-11-25 08:34:35.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:35Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:35Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.850233518 +0000 UTC m=+0.039526313 container create fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:34:35 np0005534516 systemd[1]: Started libpod-conmon-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope.
Nov 25 03:34:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.83206959 +0000 UTC m=+0.021362395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.93551131 +0000 UTC m=+0.124804185 container init fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.948300303 +0000 UTC m=+0.137593098 container start fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:34:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.951608923 +0000 UTC m=+0.140901738 container attach fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:34:35 np0005534516 interesting_meninsky[318047]: 167 167
Nov 25 03:34:35 np0005534516 systemd[1]: libpod-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope: Deactivated successfully.
Nov 25 03:34:35 np0005534516 conmon[318047]: conmon fc9b57bb1e3109838022 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope/container/memory.events
Nov 25 03:34:35 np0005534516 podman[318031]: 2025-11-25 08:34:35.959186416 +0000 UTC m=+0.148479211 container died fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:34:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6fd26b4d4cdb87f7444b710014334203ce2a13bc41437ac210f2f56ee48e733e-merged.mount: Deactivated successfully.
Nov 25 03:34:36 np0005534516 podman[318031]: 2025-11-25 08:34:36.023574867 +0000 UTC m=+0.212867662 container remove fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:34:36 np0005534516 systemd[1]: libpod-conmon-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope: Deactivated successfully.
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.222 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.223 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.238 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:36 np0005534516 podman[318071]: 2025-11-25 08:34:36.271046979 +0000 UTC m=+0.057428705 container create 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:34:36 np0005534516 systemd[1]: Started libpod-conmon-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope.
Nov 25 03:34:36 np0005534516 podman[318071]: 2025-11-25 08:34:36.24248467 +0000 UTC m=+0.028866426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.364 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.365 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.386 253542 DEBUG nova.objects.instance [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:36 np0005534516 podman[318071]: 2025-11-25 08:34:36.387844987 +0000 UTC m=+0.174226713 container init 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.396 253542 DEBUG nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.396 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.397 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.397 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.398 253542 DEBUG nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:36 np0005534516 podman[318071]: 2025-11-25 08:34:36.398510094 +0000 UTC m=+0.184891810 container start 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.398 253542 WARNING nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.407 253542 DEBUG nova.virt.libvirt.vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.407 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.408 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.412 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.414 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:36 np0005534516 podman[318071]: 2025-11-25 08:34:36.417278308 +0000 UTC m=+0.203660024 container attach 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.417 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.418 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.424 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.427 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='67'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <name>instance-0000003a</name>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='serial'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='uuid'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk' index='2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config' index='1'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:60:42:da'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='tap1682bdaf-1d'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:26:ed:fc'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='tapf2a4b65b-41'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source path='/dev/pts/1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source path='/dev/pts/1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c548</label>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c548</imagelabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.428 253542 INFO nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the persistent domain config.
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.428 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tapf2a4b65b-41 with device alias net1 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.429 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 03:34:36 np0005534516 kernel: tapf2a4b65b-41 (unregistering): left promiscuous mode
Nov 25 03:34:36 np0005534516 NetworkManager[48915]: <info>  [1764059676.5406] device (tapf2a4b65b-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.563 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059676.5634155, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:34:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:36Z|00564|binding|INFO|Releasing lport f2a4b65b-419e-44be-9413-f01693268aa8 from this chassis (sb_readonly=0)
Nov 25 03:34:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:36Z|00565|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 down in Southbound
Nov 25 03:34:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:36Z|00566|binding|INFO|Removing iface tapf2a4b65b-41 ovn-installed in OVS
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.571 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tapf2a4b65b-41 with device alias net1 for instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.572 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.572 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.573 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.575 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.576 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='67'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <name>instance-0000003a</name>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='serial'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='uuid'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk' index='2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config' index='1'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:60:42:da'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target dev='tap1682bdaf-1d'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source path='/dev/pts/1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <source path='/dev/pts/1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c36,c548</label>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c548</imagelabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.576 253542 INFO nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the live domain config.#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.577 253542 DEBUG nova.virt.libvirt.vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.577 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.579 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.579 253542 DEBUG os_vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.585 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c573077-0fb8-41cc-addd-2e89d39c4d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.616 253542 INFO os_vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.616 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:36</nova:creationTime>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 03:34:36 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:36 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:36 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.636 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a3d26f-cef5-4472-89f7-94e47e71e8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4454cb7-e215-48bf-a809-5c893c3a6ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.672 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ca12de-4637-461f-b60e-4d703b5d0839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.675 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd25f7bb-a4ee-4c54-93f3-8d53372ac5cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318103, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2911c0da-3226-41e0-903e-bed028aabf5a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318104, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318104, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.707 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:36 np0005534516 nova_compute[253538]: 2025-11-25 08:34:36.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 407 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 165 op/s
Nov 25 03:34:37 np0005534516 dreamy_clarke[318087]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:34:37 np0005534516 dreamy_clarke[318087]: --> relative data size: 1.0
Nov 25 03:34:37 np0005534516 dreamy_clarke[318087]: --> All data devices are unavailable
Nov 25 03:34:37 np0005534516 systemd[1]: libpod-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Deactivated successfully.
Nov 25 03:34:37 np0005534516 systemd[1]: libpod-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Consumed 1.027s CPU time.
Nov 25 03:34:37 np0005534516 podman[318071]: 2025-11-25 08:34:37.540539877 +0000 UTC m=+1.326921583 container died 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:34:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec-merged.mount: Deactivated successfully.
Nov 25 03:34:37 np0005534516 podman[318071]: 2025-11-25 08:34:37.600772296 +0000 UTC m=+1.387154052 container remove 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:34:37 np0005534516 systemd[1]: libpod-conmon-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Deactivated successfully.
Nov 25 03:34:37 np0005534516 nova_compute[253538]: 2025-11-25 08:34:37.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:37 np0005534516 podman[318218]: 2025-11-25 08:34:37.972203178 +0000 UTC m=+0.104512349 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 03:34:38 np0005534516 nova_compute[253538]: 2025-11-25 08:34:38.203 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:34:38 np0005534516 nova_compute[253538]: 2025-11-25 08:34:38.204 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:34:38 np0005534516 nova_compute[253538]: 2025-11-25 08:34:38.204 253542 DEBUG nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:34:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:38Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 03:34:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:38Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.301942201 +0000 UTC m=+0.061695449 container create f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:34:38 np0005534516 systemd[1]: Started libpod-conmon-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope.
Nov 25 03:34:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.279002233 +0000 UTC m=+0.038755501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.388431954 +0000 UTC m=+0.148185232 container init f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.396208863 +0000 UTC m=+0.155962111 container start f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.399801901 +0000 UTC m=+0.159555139 container attach f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:34:38 np0005534516 sleepy_wu[318321]: 167 167
Nov 25 03:34:38 np0005534516 systemd[1]: libpod-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope: Deactivated successfully.
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.403354926 +0000 UTC m=+0.163108184 container died f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:34:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c5b968a5c9d1b1215b896e920353779a8abd4970046db97f2d423a9840f70b35-merged.mount: Deactivated successfully.
Nov 25 03:34:38 np0005534516 podman[318305]: 2025-11-25 08:34:38.443722471 +0000 UTC m=+0.203475719 container remove f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:34:38 np0005534516 systemd[1]: libpod-conmon-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope: Deactivated successfully.
Nov 25 03:34:38 np0005534516 podman[318345]: 2025-11-25 08:34:38.695938519 +0000 UTC m=+0.051745371 container create f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:34:38 np0005534516 systemd[1]: Started libpod-conmon-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope.
Nov 25 03:34:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:38 np0005534516 podman[318345]: 2025-11-25 08:34:38.678514261 +0000 UTC m=+0.034321143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:38 np0005534516 podman[318345]: 2025-11-25 08:34:38.787720766 +0000 UTC m=+0.143527618 container init f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:34:38 np0005534516 podman[318345]: 2025-11-25 08:34:38.794620991 +0000 UTC m=+0.150427843 container start f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:34:38 np0005534516 podman[318345]: 2025-11-25 08:34:38.797461697 +0000 UTC m=+0.153268569 container attach f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:34:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 420 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 177 op/s
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.421 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 WARNING nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.424 253542 WARNING nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 03:34:39 np0005534516 nova_compute[253538]: 2025-11-25 08:34:39.473 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 03:34:39 np0005534516 kind_booth[318362]: {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    "0": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "devices": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "/dev/loop3"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            ],
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_name": "ceph_lv0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_size": "21470642176",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "name": "ceph_lv0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "tags": {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_name": "ceph",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.crush_device_class": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.encrypted": "0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_id": "0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.vdo": "0"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            },
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "vg_name": "ceph_vg0"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        }
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    ],
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    "1": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "devices": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "/dev/loop4"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            ],
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_name": "ceph_lv1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_size": "21470642176",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "name": "ceph_lv1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "tags": {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_name": "ceph",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.crush_device_class": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.encrypted": "0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_id": "1",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.vdo": "0"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            },
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "vg_name": "ceph_vg1"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        }
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    ],
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    "2": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "devices": [
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "/dev/loop5"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            ],
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_name": "ceph_lv2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_size": "21470642176",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "name": "ceph_lv2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "tags": {
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.cluster_name": "ceph",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.crush_device_class": "",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.encrypted": "0",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osd_id": "2",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:                "ceph.vdo": "0"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            },
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "type": "block",
Nov 25 03:34:39 np0005534516 kind_booth[318362]:            "vg_name": "ceph_vg2"
Nov 25 03:34:39 np0005534516 kind_booth[318362]:        }
Nov 25 03:34:39 np0005534516 kind_booth[318362]:    ]
Nov 25 03:34:39 np0005534516 kind_booth[318362]: }
Nov 25 03:34:39 np0005534516 systemd[1]: libpod-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope: Deactivated successfully.
Nov 25 03:34:39 np0005534516 podman[318345]: 2025-11-25 08:34:39.629177921 +0000 UTC m=+0.984984773 container died f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:34:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5-merged.mount: Deactivated successfully.
Nov 25 03:34:40 np0005534516 podman[318345]: 2025-11-25 08:34:40.271532055 +0000 UTC m=+1.627338907 container remove f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:34:40 np0005534516 systemd[1]: libpod-conmon-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope: Deactivated successfully.
Nov 25 03:34:40 np0005534516 nova_compute[253538]: 2025-11-25 08:34:40.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:34:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:40 np0005534516 podman[318523]: 2025-11-25 08:34:40.988671219 +0000 UTC m=+0.066085557 container create 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:40.944836421 +0000 UTC m=+0.022250779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.061 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.062 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:41 np0005534516 systemd[1]: Started libpod-conmon-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope.
Nov 25 03:34:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:41.129995487 +0000 UTC m=+0.207409855 container init 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:41.140279984 +0000 UTC m=+0.217694322 container start 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:34:41 np0005534516 hardcore_shaw[318539]: 167 167
Nov 25 03:34:41 np0005534516 systemd[1]: libpod-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope: Deactivated successfully.
Nov 25 03:34:41 np0005534516 conmon[318539]: conmon 7673b519e9e5a7d9c0d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope/container/memory.events
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:41.219935744 +0000 UTC m=+0.297350182 container attach 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:41.220989633 +0000 UTC m=+0.298403981 container died 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:34:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 437 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 6.4 MiB/s wr, 187 op/s
Nov 25 03:34:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-70b5a491dda378af6bd9eef8366851c6de8b71bdfde4f81b60633de5a1394d53-merged.mount: Deactivated successfully.
Nov 25 03:34:41 np0005534516 podman[318523]: 2025-11-25 08:34:41.43110671 +0000 UTC m=+0.508521078 container remove 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:34:41 np0005534516 systemd[1]: libpod-conmon-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope: Deactivated successfully.
Nov 25 03:34:41 np0005534516 nova_compute[253538]: 2025-11-25 08:34:41.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:41 np0005534516 podman[318564]: 2025-11-25 08:34:41.707180669 +0000 UTC m=+0.060015694 container create d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:34:41 np0005534516 systemd[1]: Started libpod-conmon-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope.
Nov 25 03:34:41 np0005534516 podman[318564]: 2025-11-25 08:34:41.677843001 +0000 UTC m=+0.030678046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:34:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:41 np0005534516 nova_compute[253538]: 2025-11-25 08:34:41.787 253542 INFO nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Port f2a4b65b-419e-44be-9413-f01693268aa8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:34:41 np0005534516 nova_compute[253538]: 2025-11-25 08:34:41.788 253542 DEBUG nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:41 np0005534516 podman[318564]: 2025-11-25 08:34:41.804102604 +0000 UTC m=+0.156937599 container init d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:34:41 np0005534516 podman[318564]: 2025-11-25 08:34:41.816737543 +0000 UTC m=+0.169572528 container start d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:34:41 np0005534516 podman[318564]: 2025-11-25 08:34:41.821777839 +0000 UTC m=+0.174612824 container attach d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:34:41 np0005534516 nova_compute[253538]: 2025-11-25 08:34:41.872 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:41 np0005534516 nova_compute[253538]: 2025-11-25 08:34:41.894 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:42 np0005534516 kernel: tap269f9bd3-f2 (unregistering): left promiscuous mode
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.488 253542 INFO nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance shutdown successfully after 24 seconds.#033[00m
Nov 25 03:34:42 np0005534516 NetworkManager[48915]: <info>  [1764059682.4935] device (tap269f9bd3-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:42Z|00567|binding|INFO|Releasing lport 269f9bd3-f267-459c-8e24-4b1f6c943345 from this chassis (sb_readonly=0)
Nov 25 03:34:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:42Z|00568|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 down in Southbound
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:42Z|00569|binding|INFO|Removing iface tap269f9bd3-f2 ovn-installed in OVS
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.512 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ae:8e 10.100.0.12'], port_security=['fa:16:3e:13:ae:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf44124c-1a65-4bde-a777-043ae1a53557', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=269f9bd3-f267-459c-8e24-4b1f6c943345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.513 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 269f9bd3-f267-459c-8e24-4b1f6c943345 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.516 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3802cc64-0afb-4050-afc1-dca398deaed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore#033[00m
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 25 03:34:42 np0005534516 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 15.318s CPU time.
Nov 25 03:34:42 np0005534516 systemd-machined[215790]: Machine qemu-69-instance-0000003c terminated.
Nov 25 03:34:42 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : haproxy version is 2.8.14-c23fe91
Nov 25 03:34:42 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : path to executable is /usr/sbin/haproxy
Nov 25 03:34:42 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [WARNING]  (316659) : Exiting Master process...
Nov 25 03:34:42 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [ALERT]    (316659) : Current worker (316661) exited with code 143 (Terminated)
Nov 25 03:34:42 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [WARNING]  (316659) : All workers exited. Exiting... (0)
Nov 25 03:34:42 np0005534516 systemd[1]: libpod-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope: Deactivated successfully.
Nov 25 03:34:42 np0005534516 conmon[316655]: conmon 6e22b195a22364538f85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope/container/memory.events
Nov 25 03:34:42 np0005534516 podman[318616]: 2025-11-25 08:34:42.647998934 +0000 UTC m=+0.045434391 container died 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:34:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624-userdata-shm.mount: Deactivated successfully.
Nov 25 03:34:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8245a978a5022c98149f488e1744d8b7505a54bb18353d96e818cdced9046a6b-merged.mount: Deactivated successfully.
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.735 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance destroyed successfully.#033[00m
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.735 253542 DEBUG nova.objects.instance [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.760 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:42 np0005534516 podman[318616]: 2025-11-25 08:34:42.772411718 +0000 UTC m=+0.169847175 container cleanup 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:34:42 np0005534516 systemd[1]: libpod-conmon-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope: Deactivated successfully.
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.799 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]: {
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_id": 1,
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "type": "bluestore"
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    },
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_id": 2,
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "type": "bluestore"
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    },
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_id": 0,
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:        "type": "bluestore"
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]:    }
Nov 25 03:34:42 np0005534516 nervous_lamport[318580]: }
Nov 25 03:34:42 np0005534516 systemd[1]: libpod-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Deactivated successfully.
Nov 25 03:34:42 np0005534516 systemd[1]: libpod-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Consumed 1.006s CPU time.
Nov 25 03:34:42 np0005534516 podman[318564]: 2025-11-25 08:34:42.83460246 +0000 UTC m=+1.187437455 container died d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:34:42 np0005534516 podman[318674]: 2025-11-25 08:34:42.864274257 +0000 UTC m=+0.069190951 container remove 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.872 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb1a323-d470-43d3-ab62-6b5486a97d40]: (4, ('Tue Nov 25 08:34:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624)\n6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624\nTue Nov 25 08:34:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624)\n6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d-merged.mount: Deactivated successfully.
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.876 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1adc95ac-af36-4cd1-94fa-058dddb86417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.878 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 nova_compute[253538]: 2025-11-25 08:34:42.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ea0fc6-14b2-4013-948f-7f9b4534dcfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 podman[318564]: 2025-11-25 08:34:42.914547048 +0000 UTC m=+1.267382033 container remove d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:34:42 np0005534516 systemd[1]: libpod-conmon-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Deactivated successfully.
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50fc3d7b-f231-44cd-871e-c7ab82526ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27fc10a1-98f4-46b3-a361-2fd93fa32c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[114497d1-af42-45ee-99f4-8f0aaa81061c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500876, 'reachable_time': 34904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.950 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:34:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.951 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[87b46483-ccbe-443f-8998-ab35d46a300b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:34:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:34:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ce77821b-a426-4c46-8d24-c31e9078ce39 does not exist
Nov 25 03:34:42 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0cafe8a4-51f5-435c-b5d9-1fd27d273cd4 does not exist
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG nova.compute.manager [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG nova.compute.manager [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.026 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.026 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 440 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 6.0 MiB/s wr, 187 op/s
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.390 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.391 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.391 253542 DEBUG nova.objects.instance [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:34:43 np0005534516 nova_compute[253538]: 2025-11-25 08:34:43.994 253542 DEBUG nova.objects.instance [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.008 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.160 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.160 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.336 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.360 253542 DEBUG nova.policy [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.367 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.368 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.392 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.453 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.454 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.474 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.503 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.504 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.506 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.506 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.517 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.518 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.529 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.544 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.596 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:44 np0005534516 nova_compute[253538]: 2025-11-25 08:34:44.722 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:44 np0005534516 podman[318758]: 2025-11-25 08:34:44.851656971 +0000 UTC m=+0.083436064 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:34:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784611586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.154 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.161 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.186 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.215 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.216 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.227 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.227 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 440 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.298 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.299 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.325 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.349 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.508 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.532 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.623 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.625 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.625 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating image(s)#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.651 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.679 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.710 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.713 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.784 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.785 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.786 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.786 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.810 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.813 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790279086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.978 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:45 np0005534516 nova_compute[253538]: 2025-11-25 08:34:45.984 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.004 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.043 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.045 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.048 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.053 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.054 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.109 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.109 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.162 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.177 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.206 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.245 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.368 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.370 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.371 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating image(s)#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.396 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.424 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.454 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.457 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.492 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.499 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.505 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Ensure instance console log exists: /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.524 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.524 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.537 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.538 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.539 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.540 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.568 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.572 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:46 np0005534516 nova_compute[253538]: 2025-11-25 08:34:46.666 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.031 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.081 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836616779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.201 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully updated port: f2a4b65b-419e-44be-9413-f01693268aa8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.203 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.209 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.213 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.215 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Successfully created port: 182a5a1a-c06d-4265-857f-3ea363ae01c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.218 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Ensure instance console log exists: /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.226 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 477 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.274 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.275 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.429 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.430 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.431 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.431 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.432 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.434 253542 INFO nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Terminating instance#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.436 253542 DEBUG nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.438 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.438 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.455 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance destroyed successfully.#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.456 253542 DEBUG nova.objects.instance [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.467 253542 DEBUG nova.virt.libvirt.vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:42Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.468 253542 DEBUG nova.network.os_vif_util [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.469 253542 DEBUG nova.network.os_vif_util [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.470 253542 DEBUG os_vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.474 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f9bd3-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.477 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.487 253542 INFO os_vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2')#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.508 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.643 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.646 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.647 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating image(s)#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.676 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.701 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.730 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.734 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.776 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.789 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.826 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.827 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.828 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.828 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.860 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.865 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:47 np0005534516 podman[319246]: 2025-11-25 08:34:47.880213056 +0000 UTC m=+0.127040886 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true)
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.904 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Successfully created port: f9d205bf-0705-485d-b89c-f9b9c3cdccdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.976 253542 INFO nova.virt.libvirt.driver [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deleting instance files /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557_del#033[00m
Nov 25 03:34:47 np0005534516 nova_compute[253538]: 2025-11-25 08:34:47.977 253542 INFO nova.virt.libvirt.driver [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deletion of /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557_del complete#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.028 253542 INFO nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.029 253542 DEBUG oslo.service.loopingcall [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.031 253542 DEBUG nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.031 253542 DEBUG nova.network.neutron [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.104 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Successfully updated port: 182a5a1a-c06d-4265-857f-3ea363ae01c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.119 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.120 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.120 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.200 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.255 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.257 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.264 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.290 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.291 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 WARNING nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 WARNING nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.294 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.294 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.316 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.342 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.353 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.353 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Ensure instance console log exists: /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.476 253542 DEBUG nova.compute.manager [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-changed-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.477 253542 DEBUG nova.compute.manager [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Refreshing instance network info cache due to event network-changed-182a5a1a-c06d-4265-857f-3ea363ae01c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.477 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.520 253542 WARNING nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.856 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Successfully created port: 54f02527-a6c1-4059-aa22-2c19fc6f351d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.877 253542 DEBUG nova.network.neutron [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.902 253542 INFO nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.957 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:48 np0005534516 nova_compute[253538]: 2025-11-25 08:34:48.957 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.154 253542 DEBUG oslo_concurrency.processutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.227 253542 DEBUG nova.compute.manager [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.228 253542 DEBUG nova.compute.manager [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.228 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 481 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.305 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.323 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.323 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance network_info: |[{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.324 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.324 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Refreshing network info cache for port 182a5a1a-c06d-4265-857f-3ea363ae01c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.326 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start _get_guest_xml network_info=[{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.328 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Successfully updated port: f9d205bf-0705-485d-b89c-f9b9c3cdccdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.333 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.339 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.340 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.347 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.348 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.349 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.349 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.350 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.350 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.356 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.395 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.396 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.396 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.560 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62237574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.638 253542 DEBUG oslo_concurrency.processutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.644 253542 DEBUG nova.compute.provider_tree [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.661 253542 DEBUG nova.scheduler.client.report [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.687 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.730 253542 INFO nova.scheduler.client.report [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance bf44124c-1a65-4bde-a777-043ae1a53557#033[00m
Nov 25 03:34:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654611122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.794 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.800 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.819 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:49 np0005534516 nova_compute[253538]: 2025-11-25 08:34:49.823 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:50 np0005534516 kernel: tap9200cc12-92 (unregistering): left promiscuous mode
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.0619] device (tap9200cc12-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00570|binding|INFO|Releasing lport 9200cc12-927d-418b-99c1-ca0421535979 from this chassis (sb_readonly=0)
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00571|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 down in Southbound
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00572|binding|INFO|Removing iface tap9200cc12-92 ovn-installed in OVS
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.134 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.135 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.137 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d79a1b80-a3ea-4672-b9d1-c8eae02ee0c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 03:34:50 np0005534516 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Consumed 15.762s CPU time.
Nov 25 03:34:50 np0005534516 systemd-machined[215790]: Machine qemu-71-instance-0000003e terminated.
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.193 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e05ef19b-553e-4aa4-859a-0cd804df9691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.196 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f896a-6ab5-4227-9d09-5f2b55e614f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.205 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.224 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dac3b8d3-270a-4852-9dd8-b850804b13d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.226 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.227 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.227 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.234 253542 DEBUG nova.virt.libvirt.vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.234 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.235 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.235 253542 DEBUG os_vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.236 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.237 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.242 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2a4b65b-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.242 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[907d4763-7a52-4387-98d5-fb18cd5cd0c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319480, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.243 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2a4b65b-41, col_values=(('external_ids', {'iface-id': 'f2a4b65b-419e-44be-9413-f01693268aa8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:ed:fc', 'vm-uuid': '52d39d67-b456-44e4-8804-2de0c941edae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.2458] manager: (tapf2a4b65b-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.247 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.251 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.252 253542 INFO os_vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.253 253542 DEBUG nova.virt.libvirt.vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.253 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.254 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.256 253542 DEBUG nova.virt.libvirt.guest [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:34:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539818481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.260 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c27125c2-a79b-4fc5-8c4d-f95bf29ac8aa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319483, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319483, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.262 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 kernel: tapf2a4b65b-41: entered promiscuous mode
Nov 25 03:34:50 np0005534516 systemd-udevd[319472]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.2690] manager: (tapf2a4b65b-41): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00573|binding|INFO|Claiming lport f2a4b65b-419e-44be-9413-f01693268aa8 for this chassis.
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00574|binding|INFO|f2a4b65b-419e-44be-9413-f01693268aa8: Claiming fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.271 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.272 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.272 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.273 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.281 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.2818] device (tapf2a4b65b-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.282 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.283 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Successfully updated port: 54f02527-a6c1-4059-aa22-2c19fc6f351d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.2844] device (tapf2a4b65b-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.284 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.286 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.287 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:45Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.288 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.288 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.289 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00575|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 ovn-installed in OVS
Nov 25 03:34:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:50Z|00576|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 up in Southbound
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.301 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.302 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.302 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.304 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2105de87-15ce-4b63-8dc4-f99524b6035c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.308 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <uuid>d99c7a05-3cc3-4a8b-bce4-1185023a269f</uuid>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <name>instance-0000003f</name>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-1</nova:name>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:49</nova:creationTime>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <nova:port uuid="182a5a1a-c06d-4265-857f-3ea363ae01c2">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="serial">d99c7a05-3cc3-4a8b-bce4-1185023a269f</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="uuid">d99c7a05-3cc3-4a8b-bce4-1185023a269f</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:35:cb:10"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <target dev="tap182a5a1a-c0"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/console.log" append="off"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.309 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Preparing to wait for external event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.309 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.310 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.310 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.311 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:45Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.312 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.313 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.313 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.315 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.315 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.323 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.323 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap182a5a1a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.324 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap182a5a1a-c0, col_values=(('external_ids', {'iface-id': '182a5a1a-c06d-4265-857f-3ea363ae01c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:cb:10', 'vm-uuid': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.325 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.3265] manager: (tap182a5a1a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.328 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.333 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0')#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.337 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1253b33d-8249-494a-9c12-72b61dc2710e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.340 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:50 np0005534516 NetworkManager[48915]: <info>  [1764059690.3420] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.342 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e61d76cc-d80d-4600-ac01-25fef6893bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.365 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:4d:ce:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:26:ed:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.370 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.370 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance network_info: |[{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.373 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start _get_guest_xml network_info=[{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.375 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b0a2ab-d925-47bf-9797-adc25f5b7f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.380 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.385 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.386 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.391 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.392 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31e8ea5b-2318-43ff-9003-a54214c68d4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319514, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.396 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.399 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40488f9-c421-458d-89b9-8ab7988fd169]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319515, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319515, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.408 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.415 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.415 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.416 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.416 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.445 253542 DEBUG nova.virt.libvirt.guest [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:50 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:50 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:35:cb:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.455 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Using config drive#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.477 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.484 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.488 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.804 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance shutdown successfully after 24 seconds.#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.832 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-deleted-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.832 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.833 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.834 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.835 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.835 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.836 253542 WARNING nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:34:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.839 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.#033[00m
Nov 25 03:34:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506731877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.848 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.849 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-p
roject-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:25Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.850 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.852 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.852 253542 DEBUG os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.855 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9200cc12-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.859 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.878 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.882 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:50 np0005534516 nova_compute[253538]: 2025-11-25 08:34:50.926 253542 INFO os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')#033[00m
Nov 25 03:34:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.176 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating config drive at /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.181 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0if3ix6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.221 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updated VIF entry in instance network info cache for port 182a5a1a-c06d-4265-857f-3ea363ae01c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.222 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.242 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 517 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 5.6 MiB/s wr, 136 op/s
Nov 25 03:34:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790800774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.328 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0if3ix6" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.383 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.387 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.418 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.421 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='
tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:46Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.421 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.422 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.423 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.439 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <uuid>a4fd9f97-b160-432d-9cb7-0fa3874c6468</uuid>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <name>instance-00000040</name>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-2</nova:name>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <nova:port uuid="f9d205bf-0705-485d-b89c-f9b9c3cdccdb">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="serial">a4fd9f97-b160-432d-9cb7-0fa3874c6468</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="uuid">a4fd9f97-b160-432d-9cb7-0fa3874c6468</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:9d:08:11"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <target dev="tapf9d205bf-07"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/console.log" append="off"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:51 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:51 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
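The domain definition that `_get_guest_xml` dumps above is plain libvirt XML, so the key fields (instance UUID, memory in KiB, RBD disk image, tap device) can be recovered with the Python stdlib. A sketch using `xml.etree.ElementTree` over a trimmed copy of the logged document:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <domain> definition logged at "End _get_guest_xml".
DOMAIN_XML = """\
<domain type="kvm">
 <uuid>a4fd9f97-b160-432d-9cb7-0fa3874c6468</uuid>
 <name>instance-00000040</name>
 <memory>131072</memory>
 <vcpu>1</vcpu>
 <devices>
   <disk type="network" device="disk">
     <driver type="raw" cache="none"/>
     <source protocol="rbd" name="vms/a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk">
       <host name="192.168.122.100" port="6789"/>
     </source>
     <target dev="vda" bus="virtio"/>
   </disk>
   <interface type="ethernet">
     <mac address="fa:16:3e:9d:08:11"/>
     <target dev="tapf9d205bf-07"/>
   </interface>
 </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
uuid = root.findtext("uuid")
memory_kib = int(root.findtext("memory"))  # libvirt <memory> defaults to KiB: 131072 KiB = 128 MiB
rbd_image = root.find("./devices/disk/source").get("name")
tap_dev = root.find("./devices/interface/target").get("dev")
print(uuid, memory_kib // 1024, rbd_image, tap_dev)
```

The 128 MiB recovered from `<memory>` matches the `m1.nano` flavor (`<nova:memory>128</nova:memory>`) in the metadata block above.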
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.439 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Preparing to wait for external event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_u
ser_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:46Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.443 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.444 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9d205bf-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.445 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9d205bf-07, col_values=(('external_ids', {'iface-id': 'f9d205bf-0705-485d-b89c-f9b9c3cdccdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:08:11', 'vm-uuid': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:51 np0005534516 NetworkManager[48915]: <info>  [1764059691.4474] manager: (tapf9d205bf-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.449 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.453 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07')#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-changed-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Refreshing instance network info cache due to event network-changed-f9d205bf-0705-485d-b89c-f9b9c3cdccdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.577 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.577 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Refreshing network info cache for port f9d205bf-0705-485d-b89c-f9b9c3cdccdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.599 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.599 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.600 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:9d:08:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.601 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Using config drive#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.630 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:51Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:51Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.942 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:51 np0005534516 nova_compute[253538]: 2025-11-25 08:34:51.943 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deleting local config drive /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config because it was imported into RBD.#033[00m
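The `CMD "rbd import ..."` line that oslo_concurrency logs above is a fully shell-quoted command, so its arguments (pool, Ceph client id, conf path) can be recovered by tokenizing it rather than by eyeballing. A sketch with `shlex`, over a copy of the logged command:

```python
import shlex

# Copy of the command from the oslo_concurrency.processutils CMD line above.
CMD = ("rbd import --pool vms "
       "/var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config "
       "d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config "
       "--image-format=2 --id openstack --conf /etc/ceph/ceph.conf")

argv = shlex.split(CMD)
pool = argv[argv.index("--pool") + 1]        # value following --pool
client_id = argv[argv.index("--id") + 1]     # Ceph client id used for auth
print(argv[0], argv[1], pool, client_id)
```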
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.0024] manager: (tap182a5a1a-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Nov 25 03:34:52 np0005534516 kernel: tap182a5a1a-c0: entered promiscuous mode
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00577|binding|INFO|Claiming lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 for this chassis.
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00578|binding|INFO|182a5a1a-c06d-4265-857f-3ea363ae01c2: Claiming fa:16:3e:35:cb:10 10.100.0.3
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.016 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:cb:10 10.100.0.3'], port_security=['fa:16:3e:35:cb:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=182a5a1a-c06d-4265-857f-3ea363ae01c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.018 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 182a5a1a-c06d-4265-857f-3ea363ae01c2 in datapath 837c6e7b-bab2-4553-9d96-986f67153365 bound to our chassis#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.022 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365#033[00m
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.0247] device (tap182a5a1a-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.0259] device (tap182a5a1a-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.042 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[991959d9-5dda-4473-9f82-3f5179f75bfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.043 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap837c6e7b-b1 in ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.045 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap837c6e7b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca0bdea-6740-4ca3-ae5c-394f20e8f23d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.047 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e748b42-7d96-4fdb-a004-48e049666926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 systemd-machined[215790]: New machine qemu-72-instance-0000003f.
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00579|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 ovn-installed in OVS
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00580|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 up in Southbound
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 systemd[1]: Started Virtual Machine qemu-72-instance-0000003f.
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.064 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.065 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.068 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a46125-b82e-47c8-9845-e86cd8c26108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.081 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.086 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc17e2-3307-4a0e-a9ef-05cc1a0dd23d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.128 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c998cff4-d8dc-45fb-b865-962db41f20b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.133 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e91f3490-111c-4f9a-839b-fc0be4f2f163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.1358] manager: (tap837c6e7b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.136 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating config drive at /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.142 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodx1ooj5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.174 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb205ae-ed0b-4b50-875d-83be77185a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.177 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd6010f-1ede-4d93-af6f-c40051db5a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.182 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.200 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.200 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance network_info: |[{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.2034] device (tap837c6e7b-b0): carrier: link connected
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.203 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start _get_guest_xml network_info=[{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.210 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[feb653fc-ba33-4c83-a709-f56a3b78813b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.216 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.217 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.222 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.228 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.231 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d0ee90-e9be-429a-beda-a154b4b962aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319729, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.247 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52238428-da13-4e05-9c84-4713af5f90cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:6e5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504680, 'tstamp': 504680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319730, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bba6fc07-c8d2-4e8a-a730-f356ff0442f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319732, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.289 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodx1ooj5" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.303 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[711651d4-e4c5-4db4-93e7-a1f58ebcfa69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.315 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.323 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.363 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting instance files /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.365 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deletion of /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del complete#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0666f55-c7f7-4e5d-853f-a02a775a2048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.369 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.369 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.370 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.3723] manager: (tap837c6e7b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 kernel: tap837c6e7b-b0: entered promiscuous mode
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.378 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00581|binding|INFO|Releasing lport f915f58f-151e-47fb-a373-5fd022b7fd3c from this chassis (sb_readonly=0)
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.401 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c319bbda-7cd3-4016-8b16-96af0be3253b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.403 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.404 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'env', 'PROCESS_TAG=haproxy-837c6e7b-bab2-4553-9d96-986f67153365', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/837c6e7b-bab2-4553-9d96-986f67153365.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.490 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.491 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deleting local config drive /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.503 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.504 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating image(s)#033[00m
Nov 25 03:34:52 np0005534516 kernel: tapf9d205bf-07: entered promiscuous mode
Nov 25 03:34:52 np0005534516 systemd-udevd[319719]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.5354] manager: (tapf9d205bf-07): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.532 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00582|binding|INFO|Claiming lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb for this chassis.
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00583|binding|INFO|f9d205bf-0705-485d-b89c-f9b9c3cdccdb: Claiming fa:16:3e:9d:08:11 10.100.0.6
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.5476] device (tapf9d205bf-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:52 np0005534516 NetworkManager[48915]: <info>  [1764059692.5490] device (tapf9d205bf-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.549 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:08:11 10.100.0.6'], port_security=['fa:16:3e:9d:08:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f9d205bf-0705-485d-b89c-f9b9c3cdccdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00584|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb ovn-installed in OVS
Nov 25 03:34:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:52Z|00585|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb up in Southbound
Nov 25 03:34:52 np0005534516 systemd-machined[215790]: New machine qemu-73-instance-00000040.
Nov 25 03:34:52 np0005534516 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.588 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.622 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.625 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359251585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.685 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.686 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.710 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.713 253542 DEBUG nova.objects.instance [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.737 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.743 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.784 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.786 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.786 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.809 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.815 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:52 np0005534516 podman[319901]: 2025-11-25 08:34:52.845081291 +0000 UTC m=+0.110740738 container create 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 03:34:52 np0005534516 podman[319901]: 2025-11-25 08:34:52.754806825 +0000 UTC m=+0.020466262 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.870 253542 DEBUG nova.virt.libvirt.vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.871 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.873 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.879 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.883 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.887 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.888 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:34:52 np0005534516 systemd[1]: Started libpod-conmon-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope.
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.917 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.930 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='70'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <name>instance-0000003d</name>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='serial'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='uuid'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk' index='2'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config' index='1'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:4d:ce:d4'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target dev='tap9fa407fa-66'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:26:ed:fc'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target dev='tapf2a4b65b-41'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c468,c889</label>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c468,c889</imagelabel>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.931 253542 INFO nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the persistent domain config.#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.931 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tapf2a4b65b-41 with device alias net1 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.932 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]:  <target dev="tapf2a4b65b-41"/>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:34:52 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:34:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:34:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598152ed7896db807c1194d497cbf99580dfa554ebe08979ae464ebb4bcff12a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.956 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.957 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.957 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.960 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.960 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:52 np0005534516 nova_compute[253538]: 2025-11-25 08:34:52.964 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Processing event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:52 np0005534516 podman[319901]: 2025-11-25 08:34:52.97156816 +0000 UTC m=+0.237227597 container init 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:34:52 np0005534516 podman[319901]: 2025-11-25 08:34:52.977422027 +0000 UTC m=+0.243081444 container start 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:34:53 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : New worker (320037) forked
Nov 25 03:34:53 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : Loading success.
Nov 25 03:34:53 np0005534516 kernel: tapf2a4b65b-41 (unregistering): left promiscuous mode
Nov 25 03:34:53 np0005534516 NetworkManager[48915]: <info>  [1764059693.0544] device (tapf2a4b65b-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:53Z|00586|binding|INFO|Releasing lport f2a4b65b-419e-44be-9413-f01693268aa8 from this chassis (sb_readonly=0)
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:53Z|00587|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 down in Southbound
Nov 25 03:34:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:53Z|00588|binding|INFO|Removing iface tapf2a4b65b-41 ovn-installed in OVS
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.071 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '9', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.073 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.0727398, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.073 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.077 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tapf2a4b65b-41 with device alias net1 for instance 52d39d67-b456-44e4-8804-2de0c941edae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.093 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.099 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.0748725, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.099 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.123 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.127 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.136 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f9d205bf-0705-485d-b89c-f9b9c3cdccdb in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.137 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.144 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.144 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059693.0751965, 52d39d67-b456-44e4-8804-2de0c941edae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.145 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.149 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='70'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <name>instance-0000003d</name>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='serial'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='uuid'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk' index='2'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config' index='1'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:4d:ce:d4'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev='tap9fa407fa-66'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/2'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source path='/dev/pts/2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c468,c889</label>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c468,c889</imagelabel>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.150 253542 INFO nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the live domain config.
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.150 253542 DEBUG nova.virt.libvirt.vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.151 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.151 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.152 253542 DEBUG os_vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.153 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4aa813-ef37-4f98-a71b-d3762e553dd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.159 253542 INFO os_vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.159 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:34:53</nova:creationTime>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.184 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53e506e8-0bf4-4c20-a62b-5649efe42d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.187 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6940c9-4de0-49b6-b212-439e49dc9c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.218 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8df9f674-7a00-4827-8b6b-b3d5f45e7fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.236 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4ec501-54ed-42a7-b150-edd07436a456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320080, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678124908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:34:53
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.254 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe6af74-6591-40c9-8856-b066b6c310ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320091, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320091, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.259 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.259 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.260 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 485 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 5.4 MiB/s wr, 120 op/s
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.261 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='
tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:47Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.261 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.262 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.263 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.277 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57692c65-ac6e-439c-ba2f-a60f7dd67191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.281 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <uuid>e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</uuid>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <name>instance-00000041</name>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-3</nova:name>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:52</nova:creationTime>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <nova:port uuid="54f02527-a6c1-4059-aa22-2c19fc6f351d">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="serial">e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="uuid">e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:eb:30:f0"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <target dev="tap54f02527-a6"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/console.log" append="off"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.281 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Preparing to wait for external event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_u
ser_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:47Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.288 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f02527-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.288 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f02527-a6, col_values=(('external_ids', {'iface-id': '54f02527-a6c1-4059-aa22-2c19fc6f351d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:30:f0', 'vm-uuid': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 NetworkManager[48915]: <info>  [1764059693.2903] manager: (tap54f02527-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.294 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6')#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.308 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[11afd92c-97bf-428a-8a73-a86010826d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.311 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7e6305-69ca-47de-8ae2-3eb43ba81718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.351 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5dc36c-5285-4a69-bfbe-9f4c0706b761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:eb:30:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Using config drive#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.371 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86c422da-1b1e-44d7-8dd3-14262f23ab88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.380 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa5d27f-f97c-4282-8a99-80623eb990e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320126, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320126, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.396 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.405 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.421 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.420985, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.421 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.423 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.425 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.428 253542 INFO nova.virt.libvirt.driver [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance spawned successfully.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.428 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.435 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updated VIF entry in instance network info cache for port f9d205bf-0705-485d-b89c-f9b9c3cdccdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.435 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.449 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-changed-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Refreshing instance network info cache due to event network-changed-54f02527-a6c1-4059-aa22-2c19fc6f351d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Refreshing network info cache for port 54f02527-a6c1-4059-aa22-2c19fc6f351d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.457 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.460 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.460 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.462 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.486 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.486 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.4210463, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.510 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.4251022, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.513 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.530 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 7.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.530 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.537 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.538 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.547 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.564 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.580 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.632 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 9.11 seconds to build instance.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.645 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.659 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.686 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.687 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Processing event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.721 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.721 253542 WARNING nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received unexpected event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.722 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.722 253542 INFO nova.compute.claims [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.725 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.732 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] resizing rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.776 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating config drive at /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.780 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4hcm1wz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.812 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.7284205, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.812 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.816 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.821 253542 INFO nova.virt.libvirt.driver [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance spawned successfully.#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.821 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:34:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.834 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.841 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.847 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.847 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.849 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.871 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.911 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.912 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ensure instance console log exists: /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.912 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.913 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.913 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.914 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start _get_guest_xml network_info=[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.916 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4hcm1wz" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.917 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 8.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.917 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.934 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.937 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.981 253542 WARNING nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.987 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.988 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.991 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:53 np0005534516 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.017 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.065 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 9.60 seconds to build instance.#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.082 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.112 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.344 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.346 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deleting local config drive /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:54 np0005534516 NetworkManager[48915]: <info>  [1764059694.4078] manager: (tap54f02527-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 25 03:34:54 np0005534516 kernel: tap54f02527-a6: entered promiscuous mode
Nov 25 03:34:54 np0005534516 systemd-udevd[320109]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:54Z|00589|binding|INFO|Claiming lport 54f02527-a6c1-4059-aa22-2c19fc6f351d for this chassis.
Nov 25 03:34:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:54Z|00590|binding|INFO|54f02527-a6c1-4059-aa22-2c19fc6f351d: Claiming fa:16:3e:eb:30:f0 10.100.0.10
Nov 25 03:34:54 np0005534516 NetworkManager[48915]: <info>  [1764059694.4267] device (tap54f02527-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:54 np0005534516 NetworkManager[48915]: <info>  [1764059694.4275] device (tap54f02527-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.426 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:f0 10.100.0.10'], port_security=['fa:16:3e:eb:30:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=54f02527-a6c1-4059-aa22-2c19fc6f351d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.427 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 54f02527-a6c1-4059-aa22-2c19fc6f351d in datapath 837c6e7b-bab2-4553-9d96-986f67153365 bound to our chassis#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.428 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365#033[00m
Nov 25 03:34:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:54Z|00591|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d ovn-installed in OVS
Nov 25 03:34:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:54Z|00592|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d up in Southbound
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.445 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa56a9b5-7596-4485-a20f-503db02d1b97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:54 np0005534516 systemd-machined[215790]: New machine qemu-74-instance-00000041.
Nov 25 03:34:54 np0005534516 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.480 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad041cef-90f9-490a-bcf5-eae81d46b427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.486 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6af6dd1e-7cfd-4d7d-9d92-46947e991aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.517 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7e074554-ba76-4d68-8df4-7f8d0df2be60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609599863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16c780a3-f9f9-4fa5-ab1c-fa30463b7de8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320312, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.552 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7445a4e-f1ff-4a56-a68e-1d477432abcc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320315, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320315, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.601 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.607 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.685 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.686 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.686 253542 DEBUG nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724557845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.763 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.777 253542 DEBUG nova.compute.provider_tree [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.789 253542 DEBUG nova.scheduler.client.report [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.810 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.811 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.849 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.849 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.866 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.882 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.970 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.971 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:34:54 np0005534516 nova_compute[253538]: 2025-11-25 08:34:54.971 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating image(s)#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.000 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.025 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.051 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.055 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.100 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.101 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.101 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received unexpected event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.107 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Processing event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.107 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.109 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.109 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received unexpected event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.139 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.144 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.144 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.145 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107995451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.167 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.176 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2f20fb1c-0a44-4209-aa4a-020331708117_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.213 253542 DEBUG nova.policy [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.216 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.218 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:52Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.218 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.219 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.221 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <uuid>420c5373-d9c4-4da0-9658-90eff9a19f8d</uuid>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <name>instance-0000003e</name>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1864891791</nova:name>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:53</nova:creationTime>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <nova:port uuid="9200cc12-927d-418b-99c1-ca0421535979">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="serial">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="uuid">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:73:f0:9b"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <target dev="tap9200cc12-92"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log" append="off"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:55 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:55 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:55 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:55 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Preparing to wait for external event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.223 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.223 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:52Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.227 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updated VIF entry in instance network info cache for port 54f02527-a6c1-4059-aa22-2c19fc6f351d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.227 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.231 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9200cc12-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.232 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9200cc12-92, col_values=(('external_ids', {'iface-id': '9200cc12-927d-418b-99c1-ca0421535979', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:f0:9b', 'vm-uuid': '420c5373-d9c4-4da0-9658-90eff9a19f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 NetworkManager[48915]: <info>  [1764059695.2340] manager: (tap9200cc12-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.243 253542 INFO os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.245 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 437 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 913 KiB/s rd, 6.2 MiB/s wr, 192 op/s
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:73:f0:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.295 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Using config drive#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.323 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.329 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.3204894, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.329 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.333 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.341 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.345 253542 INFO nova.virt.libvirt.driver [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance spawned successfully.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.346 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.352 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.356 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.357 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.358 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.358 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.359 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.359 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.364 253542 INFO nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Terminating instance#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.365 253542 DEBUG nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.376 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.385 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'keypairs' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.391 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.391 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.393 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.394 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.403 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.403 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.32059, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.404 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.424 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.428 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.3370552, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.454 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.464 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 7.82 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.465 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.475 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:34:55 np0005534516 kernel: tap9fa407fa-66 (unregistering): left promiscuous mode
Nov 25 03:34:55 np0005534516 NetworkManager[48915]: <info>  [1764059695.4963] device (tap9fa407fa-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.503 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2f20fb1c-0a44-4209-aa4a-020331708117_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00593|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=0)
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00594|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down in Southbound
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00595|binding|INFO|Removing iface tap9fa407fa-66 ovn-installed in OVS
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.519 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.521 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.522 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21fccf96-67c0-45a0-86ca-0fae5a55a1e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 03:34:55 np0005534516 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 15.665s CPU time.
Nov 25 03:34:55 np0005534516 systemd-machined[215790]: Machine qemu-70-instance-0000003d terminated.
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.570 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 10.99 seconds to build instance.#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce4c197-de97-4c1b-a941-28ec79611651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.588 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0845d5-925e-4787-b8fb-d8050e2740d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 kernel: tap9fa407fa-66: entered promiscuous mode
Nov 25 03:34:55 np0005534516 systemd-udevd[320298]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:55 np0005534516 NetworkManager[48915]: <info>  [1764059695.5990] manager: (tap9fa407fa-66): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Nov 25 03:34:55 np0005534516 kernel: tap9fa407fa-66 (unregistering): left promiscuous mode
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00596|binding|INFO|Claiming lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for this chassis.
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00597|binding|INFO|9fa407fa-661b-4b02-b4f4-656f6ae34cd8: Claiming fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.618 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.627 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.632 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6d15d9-f5a3-4434-800e-f482769a4956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.633 253542 INFO nova.virt.libvirt.driver [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance destroyed successfully.#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.634 253542 DEBUG nova.objects.instance [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00598|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 ovn-installed in OVS
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00599|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 up in Southbound
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00600|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=1)
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00601|if_status|INFO|Dropped 2 log messages in last 138 seconds (most recently, 138 seconds ago) due to excessive rate
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00602|if_status|INFO|Not setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down as sb is readonly
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00603|binding|INFO|Removing iface tap9fa407fa-66 ovn-installed in OVS
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00604|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=0)
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.645 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:34:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:55Z|00605|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down in Southbound
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.649 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9236022-d031-47bb-993b-05f339516801]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320567, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.659 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.673 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2907a8f-ebf8-4572-b462-c318ebe62857]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320576, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320576, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.684 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.687 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e25c23d8-75c6-4103-b1c8-c45cc0d43b07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.712 253542 DEBUG nova.virt.libvirt.vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.713 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.714 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.715 253542 DEBUG os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.718 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.718 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fa407fa-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.730 253542 INFO os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66')#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.731 253542 DEBUG nova.virt.libvirt.vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.731 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.732 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.733 253542 DEBUG os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.737 253542 INFO os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.771 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f901cd64-886c-4acb-908b-689f7ed42438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.779 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[108691f3-a3da-4524-9859-bc8048008ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.818 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6b80ba2c-1369-4451-a092-ad742c31dfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.830 253542 DEBUG nova.objects.instance [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.840 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5700870d-09f3-4f71-af4d-b7b57206be82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Ensure instance console log exists: /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.862 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc601ee1-398a-4d47-9126-7180343cedf4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320633, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320633, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.863 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 nova_compute[253538]: 2025-11-25 08:34:55.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.872 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.874 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.891 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59bd7269-6b85-4835-99b8-0fa52089f67f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.932 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[df3d383c-8449-4e72-89f8-7d76c2578af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.946 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0dacfcad-ca84-4da9-b0b1-e4ad6b081180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:34:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.984 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83869591-d767-4577-a0d9-52b093e2e8f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[944043f7-a47c-49de-b787-6348e3faac95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320641, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.027 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95eb7f2c-9c20-4582-8fbf-fcb85d1bcee7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320642, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320642, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.029 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.037 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.136 253542 INFO nova.virt.libvirt.driver [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deleting instance files /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae_del#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.137 253542 INFO nova.virt.libvirt.driver [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deletion of /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae_del complete#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.245 253542 INFO nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG oslo.service.loopingcall [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG nova.network.neutron [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.271 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating config drive at /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.291 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuauc57n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.334 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Successfully created port: 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.441 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuauc57n" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.473 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.478 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.669 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.670 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting local config drive /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config because it was imported into RBD.#033[00m
Nov 25 03:34:56 np0005534516 NetworkManager[48915]: <info>  [1764059696.7202] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 25 03:34:56 np0005534516 kernel: tap9200cc12-92: entered promiscuous mode
Nov 25 03:34:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:56Z|00606|binding|INFO|Claiming lport 9200cc12-927d-418b-99c1-ca0421535979 for this chassis.
Nov 25 03:34:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:56Z|00607|binding|INFO|9200cc12-927d-418b-99c1-ca0421535979: Claiming fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.733 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.734 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.736 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:34:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:56Z|00608|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 ovn-installed in OVS
Nov 25 03:34:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:56Z|00609|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 up in Southbound
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.755 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c02da-d870-47d1-a3f6-df50c73e39b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 systemd-udevd[320699]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:34:56 np0005534516 systemd-machined[215790]: New machine qemu-75-instance-0000003e.
Nov 25 03:34:56 np0005534516 NetworkManager[48915]: <info>  [1764059696.7832] device (tap9200cc12-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:34:56 np0005534516 NetworkManager[48915]: <info>  [1764059696.7840] device (tap9200cc12-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:34:56 np0005534516 systemd[1]: Started Virtual Machine qemu-75-instance-0000003e.
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cb519d5d-d374-4b3f-99f3-207b465c20f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9abf77f1-beb7-442d-b9c7-2ef022f065d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.828 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[784bb3da-b0f9-43b9-b07f-6cff0c57cd12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.844 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.846 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.846 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa281a3-716b-41a0-a28d-29fab07f32b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320709, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.854 253542 INFO nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Port f2a4b65b-419e-44be-9413-f01693268aa8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.855 253542 DEBUG nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1c7bcc-0ea6-4da5-9a5b-71a12c35fcd7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320710, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320710, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.876 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:56 np0005534516 nova_compute[253538]: 2025-11-25 08:34:56.896 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.121 253542 DEBUG nova.compute.manager [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.122 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.122 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.123 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.123 253542 DEBUG nova.compute.manager [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Processing event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 420c5373-d9c4-4da0-9658-90eff9a19f8d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.217494, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Started (Lifecycle Event)#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.220 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.224 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.230 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance spawned successfully.#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.232 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.234 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.237 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.251 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.251 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.252 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.252 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.253 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.254 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.259 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.2177112, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.259 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:34:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 325 op/s
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.291 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.294 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.2267354, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.294 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.327 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.340 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.363 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.422 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.423 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.423 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.475 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.590 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Successfully updated port: 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.613 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.613 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.614 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.733 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059682.732818, bf44124c-1a65-4bde-a777-043ae1a53557 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.733 253542 INFO nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.752 253542 DEBUG nova.compute.manager [None req-7c0358f1-3e28-44c3-8787-6cd667603606 - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.809 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.936 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.937 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.938 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.938 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.939 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.940 253542 INFO nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Terminating instance#033[00m
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.941 253542 DEBUG nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:34:57 np0005534516 kernel: tap182a5a1a-c0 (unregistering): left promiscuous mode
Nov 25 03:34:57 np0005534516 NetworkManager[48915]: <info>  [1764059697.9778] device (tap182a5a1a-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:57 np0005534516 nova_compute[253538]: 2025-11-25 08:34:57.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:57Z|00610|binding|INFO|Releasing lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 from this chassis (sb_readonly=0)
Nov 25 03:34:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:57Z|00611|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 down in Southbound
Nov 25 03:34:57 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:57Z|00612|binding|INFO|Removing iface tap182a5a1a-c0 ovn-installed in OVS
Nov 25 03:34:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.996 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:cb:10 10.100.0.3'], port_security=['fa:16:3e:35:cb:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=182a5a1a-c06d-4265-857f-3ea363ae01c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.997 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 182a5a1a-c06d-4265-857f-3ea363ae01c2 in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.999 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79a31f81-a187-40f2-8455-5f3e73dd316b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 25 03:34:58 np0005534516 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Consumed 5.036s CPU time.
Nov 25 03:34:58 np0005534516 systemd-machined[215790]: Machine qemu-72-instance-0000003f terminated.
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.040 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5fa50d-17a5-46c2-9c02-97028853e315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.043 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc62b71-289e-4456-b9c4-2deefe1b4398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.085 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba86a56b-11d3-4acf-bee1-c148d2b4b844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.106 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d642e0be-1597-4599-b3d8-e8b92cc9df7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320763, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[666c6dd3-031a-49fc-b45c-88fc1b212269]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320764, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320764, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.188 253542 INFO nova.virt.libvirt.driver [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance destroyed successfully.#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.188 253542 DEBUG nova.objects.instance [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.198 253542 DEBUG nova.virt.libvirt.vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.199 253542 DEBUG nova.network.os_vif_util [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.200 253542 DEBUG nova.network.os_vif_util [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.200 253542 DEBUG os_vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.203 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap182a5a1a-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.209 253542 INFO os_vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0')#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.451 253542 DEBUG nova.network.neutron [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.509 253542 INFO nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 2.26 seconds to deallocate network for instance.#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.603 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.603 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.782 253542 DEBUG oslo_concurrency.processutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.952 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.987 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.988 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance network_info: |[{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:34:58 np0005534516 nova_compute[253538]: 2025-11-25 08:34:58.992 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start _get_guest_xml network_info=[{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.002 253542 WARNING nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.012 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received unexpected event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.021 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.022 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.025 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.032 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.112 253542 INFO nova.virt.libvirt.driver [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deleting instance files /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f_del#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.114 253542 INFO nova.virt.libvirt.driver [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deletion of /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f_del complete#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.165 253542 INFO nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG oslo.service.loopingcall [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG nova.network.neutron [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.194 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 WARNING nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-changed-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Refreshing instance network info cache due to event network-changed-69b7733c-f471-4b5d-9fe9-b9b25d5836d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.197 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.197 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Refreshing network info cache for port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:34:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 453 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.0 MiB/s wr, 392 op/s
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581526924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.298 253542 DEBUG oslo_concurrency.processutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.307 253542 DEBUG nova.compute.provider_tree [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.323 253542 DEBUG nova.scheduler.client.report [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.355 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819003180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.479 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.503 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.509 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.556 253542 INFO nova.scheduler.client.report [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 52d39d67-b456-44e4-8804-2de0c941edae#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.644 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.646 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.647 253542 INFO nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Terminating instance#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.649 253542 DEBUG nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:34:59 np0005534516 kernel: tap9200cc12-92 (unregistering): left promiscuous mode
Nov 25 03:34:59 np0005534516 NetworkManager[48915]: <info>  [1764059699.6965] device (tap9200cc12-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:34:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:59Z|00613|binding|INFO|Releasing lport 9200cc12-927d-418b-99c1-ca0421535979 from this chassis (sb_readonly=0)
Nov 25 03:34:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:59Z|00614|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 down in Southbound
Nov 25 03:34:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:34:59Z|00615|binding|INFO|Removing iface tap9200cc12-92 ovn-installed in OVS
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.755 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:34:59 np0005534516 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 03:34:59 np0005534516 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003e.scope: Consumed 2.820s CPU time.
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.756 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.759 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:34:59 np0005534516 systemd-machined[215790]: Machine qemu-75-instance-0000003e terminated.
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.761 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed165fcc-b3ce-42e2-9cc9-d7388f464d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.810 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[985a7323-2bcb-416e-b0c1-67646a92db9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.812 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[607becfd-411a-4784-bea8-fe3eede2648b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.852 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0804ae8a-75c4-4e4b-835a-a821c5795a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca0999e-e2aa-4a36-b4e0-88fedd8460d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320888, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 NetworkManager[48915]: <info>  [1764059699.8709] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.887 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d640a2c-ef89-4eb9-8616-f51b7f2e971e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.895 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.895 253542 DEBUG nova.objects.instance [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.899 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.907 253542 DEBUG nova.virt.libvirt.vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virt
io',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:57Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.907 253542 DEBUG nova.network.os_vif_util [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.908 253542 DEBUG nova.network.os_vif_util [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.908 253542 DEBUG os_vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.910 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9200cc12-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.915 253542 INFO os_vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')#033[00m
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:34:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172560043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.967 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.968 253542 DEBUG nova.virt.libvirt.vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-
2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.969 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.970 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.971 253542 DEBUG nova.objects.instance [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:34:59 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.988 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <uuid>2f20fb1c-0a44-4209-aa4a-020331708117</uuid>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <name>instance-00000042</name>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:name>tempest-DeleteServersTestJSON-server-756634894</nova:name>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:34:59</nova:creationTime>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <nova:port uuid="69b7733c-f471-4b5d-9fe9-b9b25d5836d9">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="serial">2f20fb1c-0a44-4209-aa4a-020331708117</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="uuid">2f20fb1c-0a44-4209-aa4a-020331708117</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2f20fb1c-0a44-4209-aa4a-020331708117_disk">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2f20fb1c-0a44-4209-aa4a-020331708117_disk.config">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:a2:ff:8f"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <target dev="tap69b7733c-f4"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/console.log" append="off"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:34:59 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:34:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:34:59 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:34:59 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Preparing to wait for external event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.999 253542 DEBUG nova.virt.libvirt.vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:34:59.999 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.000 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.000 253542 DEBUG os_vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69b7733c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69b7733c-f4, col_values=(('external_ids', {'iface-id': '69b7733c-f471-4b5d-9fe9-b9b25d5836d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ff:8f', 'vm-uuid': '2f20fb1c-0a44-4209-aa4a-020331708117'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:00 np0005534516 NetworkManager[48915]: <info>  [1764059700.0152] manager: (tap69b7733c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.022 253542 INFO os_vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4')#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.041 253542 DEBUG nova.network.neutron [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.068 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.068 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.069 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:a2:ff:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.069 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Using config drive#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.095 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.107 253542 INFO nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.172 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.173 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.324 253542 DEBUG oslo_concurrency.processutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.375 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updated VIF entry in instance network info cache for port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.376 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.389 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.389 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-deleted-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.467 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.468 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.469 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.470 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.470 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.472 253542 INFO nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Terminating instance#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.474 253542 DEBUG nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.526 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating config drive at /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.531 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ad94_ok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.680 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ad94_ok" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.748 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.761 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420102889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:00 np0005534516 kernel: tap1682bdaf-1d (unregistering): left promiscuous mode
Nov 25 03:35:00 np0005534516 NetworkManager[48915]: <info>  [1764059700.7762] device (tap1682bdaf-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:00Z|00616|binding|INFO|Releasing lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f from this chassis (sb_readonly=0)
Nov 25 03:35:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:00Z|00617|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f down in Southbound
Nov 25 03:35:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:00Z|00618|binding|INFO|Removing iface tap1682bdaf-1d ovn-installed in OVS
Nov 25 03:35:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.808 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:42:da 10.100.0.9'], port_security=['fa:16:3e:60:42:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1682bdaf-1dd6-4036-8d17-a169dbaaca8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.810 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis#033[00m
Nov 25 03:35:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.811 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e108de8-9efb-484e-9cca-947de2a31918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.813 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.827 253542 DEBUG oslo_concurrency.processutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:00 np0005534516 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 03:35:00 np0005534516 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 15.229s CPU time.
Nov 25 03:35:00 np0005534516 systemd-machined[215790]: Machine qemu-67-instance-0000003a terminated.
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.834 253542 DEBUG nova.compute.provider_tree [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.847 253542 DEBUG nova.scheduler.client.report [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.875 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.911 253542 INFO nova.scheduler.client.report [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance d99c7a05-3cc3-4a8b-bce4-1185023a269f#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.918 253542 INFO nova.virt.libvirt.driver [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance destroyed successfully.#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.918 253542 DEBUG nova.objects.instance [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.930 253542 DEBUG nova.virt.libvirt.vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.932 253542 DEBUG nova.network.os_vif_util [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.933 253542 DEBUG nova.network.os_vif_util [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.933 253542 DEBUG os_vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.935 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1682bdaf-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.937 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.944 253542 INFO os_vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d')#033[00m
Nov 25 03:35:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:00 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:00 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:00 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [WARNING]  (315510) : Exiting Master process...
Nov 25 03:35:00 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [ALERT]    (315510) : Current worker (315512) exited with code 143 (Terminated)
Nov 25 03:35:00 np0005534516 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [WARNING]  (315510) : All workers exited. Exiting... (0)
Nov 25 03:35:00 np0005534516 systemd[1]: libpod-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope: Deactivated successfully.
Nov 25 03:35:00 np0005534516 nova_compute[253538]: 2025-11-25 08:35:00.995 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:01 np0005534516 podman[321034]: 2025-11-25 08:35:01.003083965 +0000 UTC m=+0.070111345 container died fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:35:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 416 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.4 MiB/s wr, 437 op/s
Nov 25 03:35:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-17ce801fda8e4bb6e919da95eaa73f45324ff65118d3fb3c246879cdb73c93b9-merged.mount: Deactivated successfully.
Nov 25 03:35:01 np0005534516 podman[321034]: 2025-11-25 08:35:01.449767131 +0000 UTC m=+0.516794501 container cleanup fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:35:01 np0005534516 systemd[1]: libpod-conmon-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope: Deactivated successfully.
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.485 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.486 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deleting local config drive /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config because it was imported into RBD.#033[00m
Nov 25 03:35:01 np0005534516 podman[321085]: 2025-11-25 08:35:01.531148908 +0000 UTC m=+0.057799835 container remove fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fedb1dd7-f100-47d0-b7c4-ecfabe31ab31]: (4, ('Tue Nov 25 08:35:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc)\nfd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc\nTue Nov 25 08:35:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc)\nfd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.541 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18e10e87-5976-4d9b-8f82-e90d27578346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.542 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 03:35:01 np0005534516 kernel: tap69b7733c-f4: entered promiscuous mode
Nov 25 03:35:01 np0005534516 systemd-udevd[320899]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.5539] manager: (tap69b7733c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.5641] device (tap69b7733c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.5652] device (tap69b7733c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:01Z|00619|binding|INFO|Claiming lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for this chassis.
Nov 25 03:35:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:01Z|00620|binding|INFO|69b7733c-f471-4b5d-9fe9-b9b25d5836d9: Claiming fa:16:3e:a2:ff:8f 10.100.0.4
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2f7c9b-585b-428d-b529-34b8467df6cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.589 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ff:8f 10.100.0.4'], port_security=['fa:16:3e:a2:ff:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f20fb1c-0a44-4209-aa4a-020331708117', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=69b7733c-f471-4b5d-9fe9-b9b25d5836d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:01Z|00621|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 ovn-installed in OVS
Nov 25 03:35:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:01Z|00622|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 up in Southbound
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.592 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b29a1e8-935f-44e7-995d-6bb973ff3fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.594 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc757cfd-d497-45d8-9b8a-02f3a59821a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 systemd-machined[215790]: New machine qemu-76-instance-00000042.
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.603 253542 INFO nova.virt.libvirt.driver [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting instance files /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.604 253542 INFO nova.virt.libvirt.driver [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deletion of /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del complete#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6161fa95-5712-42c2-b994-7cd45772ff1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499067, 'reachable_time': 44139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321114, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.611 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.612 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a606b700-b5ad-4272-b5fb-97e8ba11876c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.612 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.614 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721#033[00m
Nov 25 03:35:01 np0005534516 systemd[1]: Started Virtual Machine qemu-76-instance-00000042.
Nov 25 03:35:01 np0005534516 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.635 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2099ec5-b240-4899-a95f-d16f20a43f22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.638 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.639 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[281f25ac-2ba9-4050-91b1-7b6ee457111f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3dd98c0-4293-4f9e-a13d-2fe2590be237]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.657 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9d421845-85b2-41b2-bee9-614c1106baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.685 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b70f6d29-74cf-4f7f-b4d6-511bac96dc52]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 INFO nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 DEBUG oslo.service.loopingcall [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 DEBUG nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.696 253542 DEBUG nova.network.neutron [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.723 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[879b237f-e2c3-41f9-89f4-6ea7a0a4b8c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.7312] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2abf82fc-cf6f-43bc-87e3-153518fa88d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.765 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9b89c7-1fdc-493d-8f66-5f8e6b0902a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.769 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a53d6770-305e-4a8b-a549-19ab55a989a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.8014] device (tapa66e51b8-e0): carrier: link connected
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.809 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[51784082-e1b0-4334-8e34-218b8aa904ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4ac16f-851e-4b47-ad5a-45c37df44baa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505640, 'reachable_time': 36641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321146, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.849 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d21f907-6aa2-4669-b4ad-b86c521e91d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505640, 'tstamp': 505640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321147, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57859526-096d-4356-ac31-7e9243d41b45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505640, 'reachable_time': 36641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321148, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.883 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.885 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.892 253542 INFO nova.virt.libvirt.driver [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deleting instance files /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_del#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.892 253542 INFO nova.virt.libvirt.driver [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deletion of /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_del complete#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[adea9f82-5ba9-45a6-91a7-f37058d9af73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-deleted-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 WARNING nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f726e2-8fe4-4139-b0dc-1a44b6904c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:01 np0005534516 NetworkManager[48915]: <info>  [1764059701.9827] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 25 03:35:01 np0005534516 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.986 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:01Z|00623|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 03:35:01 np0005534516 nova_compute[253538]: 2025-11-25 08:35:01.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.991 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.992 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c542b7c-aa02-40f2-8579-8fc3db65f722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.993 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.995 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.046 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059702.0462747, 2f20fb1c-0a44-4209-aa4a-020331708117 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.047 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.061 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.070 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059702.0463982, 2f20fb1c-0a44-4209-aa4a-020331708117 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.070 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.084 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.087 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.103 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.150 253542 INFO nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 1.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.150 253542 DEBUG oslo.service.loopingcall [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.151 253542 DEBUG nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:02 np0005534516 nova_compute[253538]: 2025-11-25 08:35:02.151 253542 DEBUG nova.network.neutron [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:02 np0005534516 podman[321222]: 2025-11-25 08:35:02.357777864 +0000 UTC m=+0.057214359 container create ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:35:02 np0005534516 systemd[1]: Started libpod-conmon-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope.
Nov 25 03:35:02 np0005534516 podman[321222]: 2025-11-25 08:35:02.327792138 +0000 UTC m=+0.027228653 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686e5e97ca0c4d027b1dcfdd9419f6c15acdd73a33a167ddf2036db8efab363c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:02 np0005534516 podman[321222]: 2025-11-25 08:35:02.451122212 +0000 UTC m=+0.150558717 container init ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:35:02 np0005534516 podman[321222]: 2025-11-25 08:35:02.459142858 +0000 UTC m=+0.158579353 container start ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:02 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : New worker (321243) forked
Nov 25 03:35:02 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : Loading success.
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 361 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 452 op/s
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.429 253542 DEBUG nova.network.neutron [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.446 253542 INFO nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 1.75 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.497 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.497 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.595 253542 DEBUG nova.network.neutron [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.616 253542 INFO nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.657 253542 DEBUG oslo_concurrency.processutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.702 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.805 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.808 253542 INFO nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Terminating instance#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.809 253542 DEBUG nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002689950531516572 of space, bias 1.0, pg target 0.8069851594549716 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:35:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:35:03 np0005534516 kernel: tapf9d205bf-07 (unregistering): left promiscuous mode
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.952 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.952 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.953 253542 INFO nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Terminating instance#033[00m
Nov 25 03:35:03 np0005534516 NetworkManager[48915]: <info>  [1764059703.9559] device (tapf9d205bf-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.957 253542 DEBUG nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:03Z|00624|binding|INFO|Releasing lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb from this chassis (sb_readonly=0)
Nov 25 03:35:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:03Z|00625|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb down in Southbound
Nov 25 03:35:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:03Z|00626|binding|INFO|Removing iface tapf9d205bf-07 ovn-installed in OVS
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.976 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:08:11 10.100.0.6'], port_security=['fa:16:3e:9d:08:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f9d205bf-0705-485d-b89c-f9b9c3cdccdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.977 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f9d205bf-0705-485d-b89c-f9b9c3cdccdb in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis#033[00m
Nov 25 03:35:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.979 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04128c00-854f-4737-86ee-a1d67bd494bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.995 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.995 253542 DEBUG nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.996 253542 WARNING nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:35:03 np0005534516 nova_compute[253538]: 2025-11-25 08:35:03.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 25 03:35:04 np0005534516 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 10.820s CPU time.
Nov 25 03:35:04 np0005534516 systemd-machined[215790]: Machine qemu-73-instance-00000040 terminated.
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.022 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b681a2a-d8eb-42c3-b74c-f5b16f6c623e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.025 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2c1fab-22af-4905-bec5-1302483d6bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.053 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[de6f9dec-01bb-48b8-835d-c8d5061c2bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 kernel: tap54f02527-a6 (unregistering): left promiscuous mode
Nov 25 03:35:04 np0005534516 NetworkManager[48915]: <info>  [1764059704.0680] device (tap54f02527-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:04Z|00627|binding|INFO|Releasing lport 54f02527-a6c1-4059-aa22-2c19fc6f351d from this chassis (sb_readonly=0)
Nov 25 03:35:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:04Z|00628|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d down in Southbound
Nov 25 03:35:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:04Z|00629|binding|INFO|Removing iface tap54f02527-a6 ovn-installed in OVS
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.083 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc7d6d5-0582-4f8f-9d20-6ec264311e8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321283, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.088 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:f0 10.100.0.10'], port_security=['fa:16:3e:eb:30:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=54f02527-a6c1-4059-aa22-2c19fc6f351d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.107 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d81ffb5-802b-42f2-82f1-ea7f45585280]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321288, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321288, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.115 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.127 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 54f02527-a6c1-4059-aa22-2c19fc6f351d in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis#033[00m
Nov 25 03:35:04 np0005534516 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 25 03:35:04 np0005534516 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 9.233s CPU time.
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.129 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 837c6e7b-bab2-4553-9d96-986f67153365, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.129 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ccb488-b107-4d4f-8c37-b590037e9131]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.130 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 namespace which is not needed anymore#033[00m
Nov 25 03:35:04 np0005534516 systemd-machined[215790]: Machine qemu-74-instance-00000041 terminated.
Nov 25 03:35:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635438713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.178 253542 DEBUG oslo_concurrency.processutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.187 253542 DEBUG nova.compute.provider_tree [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.195 253542 INFO nova.virt.libvirt.driver [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance destroyed successfully.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.196 253542 DEBUG nova.objects.instance [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.202 253542 DEBUG nova.scheduler.client.report [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.213 253542 DEBUG nova.virt.libvirt.vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-25T08:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:55Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.214 253542 DEBUG nova.network.os_vif_util [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.215 253542 DEBUG nova.network.os_vif_util [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.215 253542 DEBUG os_vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.218 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f02527-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.227 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.233 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.235 253542 INFO os_vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6')#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.265 253542 INFO nova.virt.libvirt.driver [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance destroyed successfully.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.266 253542 DEBUG nova.objects.instance [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:04 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:04 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:04 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [WARNING]  (320027) : Exiting Master process...
Nov 25 03:35:04 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [ALERT]    (320027) : Current worker (320037) exited with code 143 (Terminated)
Nov 25 03:35:04 np0005534516 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [WARNING]  (320027) : All workers exited. Exiting... (0)
Nov 25 03:35:04 np0005534516 systemd[1]: libpod-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope: Deactivated successfully.
Nov 25 03:35:04 np0005534516 podman[321324]: 2025-11-25 08:35:04.280414496 +0000 UTC m=+0.047467636 container died 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.282 253542 DEBUG nova.virt.libvirt.vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T08:34:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:53Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.282 253542 DEBUG nova.network.os_vif_util [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.283 253542 DEBUG nova.network.os_vif_util [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.283 253542 DEBUG os_vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.285 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9d205bf-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.291 253542 INFO nova.scheduler.client.report [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Deleted allocations for instance 420c5373-d9c4-4da0-9658-90eff9a19f8d#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.294 253542 INFO os_vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07')#033[00m
Nov 25 03:35:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-598152ed7896db807c1194d497cbf99580dfa554ebe08979ae464ebb4bcff12a-merged.mount: Deactivated successfully.
Nov 25 03:35:04 np0005534516 podman[321324]: 2025-11-25 08:35:04.330353189 +0000 UTC m=+0.097406289 container cleanup 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:35:04 np0005534516 systemd[1]: libpod-conmon-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope: Deactivated successfully.
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.377 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 podman[321396]: 2025-11-25 08:35:04.403445613 +0000 UTC m=+0.051759322 container remove 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.408 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:04 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:04 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Processing event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 WARNING nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received unexpected event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-deleted-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-deleted-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.412 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ddea79e4-2efb-468f-8085-b2d67e854840]: (4, ('Tue Nov 25 08:35:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 (3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430)\n3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430\nTue Nov 25 08:35:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 (3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430)\n3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.412 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72fa818a-b90e-4e2e-a6d0-8bcfa96e67c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.414 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 kernel: tap837c6e7b-b0: left promiscuous mode
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.419 253542 DEBUG oslo_concurrency.processutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9df85d97-97e4-46de-9fe4-f3005a4d7f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60d29175-b96d-4311-87f0-7032cbd2929c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6cef38-2637-41d4-b64e-2d72105fdcf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.461 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059704.4195974, 2f20fb1c-0a44-4209-aa4a-020331708117 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.462 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.463 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.467 253542 INFO nova.virt.libvirt.driver [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance spawned successfully.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.467 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.476 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1710d80c-a254-4a86-b9de-6a1d6ee41d63]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504671, 'reachable_time': 24555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321412, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 systemd[1]: run-netns-ovnmeta\x2d837c6e7b\x2dbab2\x2d4553\x2d9d96\x2d986f67153365.mount: Deactivated successfully.
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.481 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.482 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0051f546-20e0-41e6-89e2-8a64b0fa4e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.489 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.495 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.499 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.499 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.500 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.500 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.501 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.501 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.529 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.572 253542 INFO nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 9.60 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.573 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.647 253542 INFO nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 11.02 seconds to build instance.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.662 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.729 253542 INFO nova.virt.libvirt.driver [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deleting instance files /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_del#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.730 253542 INFO nova.virt.libvirt.driver [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deletion of /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_del complete#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.775 253542 INFO nova.virt.libvirt.driver [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deleting instance files /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468_del#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.776 253542 INFO nova.virt.libvirt.driver [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deletion of /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468_del complete#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.785 253542 INFO nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.786 253542 DEBUG oslo.service.loopingcall [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.787 253542 DEBUG nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.787 253542 DEBUG nova.network.neutron [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.824 253542 INFO nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.825 253542 DEBUG oslo.service.loopingcall [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.825 253542 DEBUG nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.826 253542 DEBUG nova.network.neutron [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531562368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.907 253542 DEBUG oslo_concurrency.processutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.913 253542 DEBUG nova.compute.provider_tree [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.926 253542 DEBUG nova.scheduler.client.report [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.947 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:04 np0005534516 nova_compute[253538]: 2025-11-25 08:35:04.983 253542 INFO nova.scheduler.client.report [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.062 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.257 253542 DEBUG nova.objects.instance [None req-f68b9c9e-aef8-46c1-add9-b3b1d2c03b48 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 228 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 3.6 MiB/s wr, 499 op/s
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.292 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059705.292629, 2f20fb1c-0a44-4209-aa4a-020331708117 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.293 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.314 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.319 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.338 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:35:05 np0005534516 kernel: tap69b7733c-f4 (unregistering): left promiscuous mode
Nov 25 03:35:05 np0005534516 NetworkManager[48915]: <info>  [1764059705.4887] device (tap69b7733c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:05Z|00630|binding|INFO|Releasing lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 from this chassis (sb_readonly=0)
Nov 25 03:35:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:05Z|00631|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 down in Southbound
Nov 25 03:35:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:05Z|00632|binding|INFO|Removing iface tap69b7733c-f4 ovn-installed in OVS
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ff:8f 10.100.0.4'], port_security=['fa:16:3e:a2:ff:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f20fb1c-0a44-4209-aa4a-020331708117', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=69b7733c-f471-4b5d-9fe9-b9b25d5836d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.557 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.560 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.561 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af1dab43-3410-4183-8ca4-86cbf014e174]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.580 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:05 np0005534516 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 25 03:35:05 np0005534516 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Consumed 1.322s CPU time.
Nov 25 03:35:05 np0005534516 systemd-machined[215790]: Machine qemu-76-instance-00000042 terminated.
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.681 253542 DEBUG nova.compute.manager [None req-f68b9c9e-aef8-46c1-add9-b3b1d2c03b48 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : Exiting Master process...
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : Exiting Master process...
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [ALERT]    (321241) : Current worker (321243) exited with code 143 (Terminated)
Nov 25 03:35:05 np0005534516 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : All workers exited. Exiting... (0)
Nov 25 03:35:05 np0005534516 systemd[1]: libpod-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope: Deactivated successfully.
Nov 25 03:35:05 np0005534516 podman[321461]: 2025-11-25 08:35:05.72570264 +0000 UTC m=+0.057098255 container died ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-686e5e97ca0c4d027b1dcfdd9419f6c15acdd73a33a167ddf2036db8efab363c-merged.mount: Deactivated successfully.
Nov 25 03:35:05 np0005534516 podman[321461]: 2025-11-25 08:35:05.785199349 +0000 UTC m=+0.116594954 container cleanup ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:05 np0005534516 systemd[1]: libpod-conmon-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope: Deactivated successfully.
Nov 25 03:35:05 np0005534516 podman[321502]: 2025-11-25 08:35:05.860630596 +0000 UTC m=+0.049157182 container remove ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60fc96ca-ffe0-4df2-abca-c4be787fa0e8]: (4, ('Tue Nov 25 08:35:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47)\nac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47\nTue Nov 25 08:35:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47)\nac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[083fe962-ed47-45b3-965a-a9d3a10b2dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:05 np0005534516 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 03:35:05 np0005534516 nova_compute[253538]: 2025-11-25 08:35:05.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38606bdc-97a6-433f-9306-0541ad4a67ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.911 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8550ca-eed0-43f4-855c-911d43f026d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fc454c-e93f-486d-a79b-cac0253f0ce0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.936 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1a1100-079c-4de0-b565-3cbd7ef2224f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505631, 'reachable_time': 33367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321520, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.939 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.939 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5524840b-45e1-464f-8b09-f8cb067f81b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:05 np0005534516 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 03:35:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 191 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.8 MiB/s wr, 442 op/s
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.509 253542 DEBUG nova.network.neutron [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.534 253542 DEBUG nova.network.neutron [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.537 253542 INFO nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 2.71 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.557 253542 INFO nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 2.77 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.595 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.596 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.610 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.611 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.611 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 WARNING nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received unexpected event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.614 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.614 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 WARNING nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received unexpected event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.618 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.692 253542 DEBUG oslo_concurrency.processutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.810 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.811 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.811 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.816 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.817 253542 DEBUG nova.objects.instance [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:07 np0005534516 nova_compute[253538]: 2025-11-25 08:35:07.836 253542 DEBUG nova.virt.libvirt.driver [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:35:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628093699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.137 253542 DEBUG oslo_concurrency.processutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.143 253542 DEBUG nova.compute.provider_tree [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.154 253542 DEBUG nova.scheduler.client.report [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.173 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.175 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.198 253542 INFO nova.scheduler.client.report [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance a4fd9f97-b160-432d-9cb7-0fa3874c6468#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.266 253542 DEBUG oslo_concurrency.processutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.303 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/764482650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.691 253542 DEBUG oslo_concurrency.processutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.697 253542 DEBUG nova.compute.provider_tree [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.709 253542 DEBUG nova.scheduler.client.report [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.727 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.757 253542 INFO nova.scheduler.client.report [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance e07ccbcb-d60d-4c15-95c2-9f5046ab99a3#033[00m
Nov 25 03:35:08 np0005534516 podman[321565]: 2025-11-25 08:35:08.792107702 +0000 UTC m=+0.048179357 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 03:35:08 np0005534516 nova_compute[253538]: 2025-11-25 08:35:08.812 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.188 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.189 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.189 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.190 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.190 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.191 253542 INFO nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Terminating instance#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.192 253542 DEBUG nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.198 253542 INFO nova.virt.libvirt.driver [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance destroyed successfully.#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.198 253542 DEBUG nova.objects.instance [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.209 253542 DEBUG nova.virt.libvirt.vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.209 253542 DEBUG nova.network.os_vif_util [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.210 253542 DEBUG nova.network.os_vif_util [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.210 253542 DEBUG os_vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.213 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69b7733c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.219 253542 INFO os_vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4')#033[00m
Nov 25 03:35:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 169 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.5 MiB/s wr, 341 op/s
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.710 253542 INFO nova.virt.libvirt.driver [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deleting instance files /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117_del#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.711 253542 INFO nova.virt.libvirt.driver [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deletion of /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117_del complete#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.757 253542 INFO nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.758 253542 DEBUG oslo.service.loopingcall [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.759 253542 DEBUG nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:09 np0005534516 nova_compute[253538]: 2025-11-25 08:35:09.759 253542 DEBUG nova.network.neutron [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:10 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:35:10 np0005534516 NetworkManager[48915]: <info>  [1764059710.1130] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:10Z|00633|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:35:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:10Z|00634|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:35:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:10Z|00635|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.148 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '12', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.149 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.151 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[210c3741-b8fb-4ba8-891e-a3b0cd202062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:10 np0005534516 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:35:10 np0005534516 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000032.scope: Consumed 17.728s CPU time.
Nov 25 03:35:10 np0005534516 systemd-machined[215790]: Machine qemu-66-instance-00000032 terminated.
Nov 25 03:35:10 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:10 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:10 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [WARNING]  (314674) : Exiting Master process...
Nov 25 03:35:10 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [ALERT]    (314674) : Current worker (314676) exited with code 143 (Terminated)
Nov 25 03:35:10 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [WARNING]  (314674) : All workers exited. Exiting... (0)
Nov 25 03:35:10 np0005534516 systemd[1]: libpod-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope: Deactivated successfully.
Nov 25 03:35:10 np0005534516 podman[321627]: 2025-11-25 08:35:10.290003549 +0000 UTC m=+0.044399294 container died 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:35:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b287a9705456981153ae9eff66d536d8de2c4f253b08bf0d27b3e6ca96100e2d-merged.mount: Deactivated successfully.
Nov 25 03:35:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:10 np0005534516 podman[321627]: 2025-11-25 08:35:10.329083359 +0000 UTC m=+0.083479084 container cleanup 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:10 np0005534516 systemd[1]: libpod-conmon-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope: Deactivated successfully.
Nov 25 03:35:10 np0005534516 podman[321657]: 2025-11-25 08:35:10.392414722 +0000 UTC m=+0.042853742 container remove 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9940e066-a224-4d00-ae62-094c71da7b00]: (4, ('Tue Nov 25 08:35:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803)\n66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803\nTue Nov 25 08:35:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803)\n66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d87984ec-2c12-4fc8-bc63-f052f629f042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.400 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:10 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.467 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a9c84a-a990-4db2-b5fe-911fb70dbade]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d01d519a-ad76-4d0a-b83a-c1dc25c30708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.481 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3622cf3a-a253-4187-9e71-731c9ec30433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.493 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7797dc00-1e9a-4e43-9133-c4ed0b5d8e3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496826, 'reachable_time': 34901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321686, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.495 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.495 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a06ceb-aed9-43ec-9fd7-1a0485fb5ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:10 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.497 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-deleted-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-deleted-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 WARNING nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received unexpected event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with vm_state suspended and task_state deleting.#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.544 253542 DEBUG nova.network.neutron [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.573 253542 INFO nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 0.81 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.627 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059695.61694, 52d39d67-b456-44e4-8804-2de0c941edae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.628 253542 INFO nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.643 253542 DEBUG nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.643 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 WARNING nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.647 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.647 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.648 253542 DEBUG nova.compute.manager [None req-a9c3559c-4910-451c-bee8-1335b1bf61b4 - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.709 253542 DEBUG oslo_concurrency.processutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.855 253542 INFO nova.virt.libvirt.driver [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.860 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.860 253542 DEBUG nova.objects.instance [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.877 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:10 np0005534516 nova_compute[253538]: 2025-11-25 08:35:10.931 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427142117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.180 253542 DEBUG oslo_concurrency.processutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.186 253542 DEBUG nova.compute.provider_tree [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.200 253542 DEBUG nova.scheduler.client.report [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.221 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 147 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 615 KiB/s wr, 282 op/s
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.276 253542 INFO nova.scheduler.client.report [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 2f20fb1c-0a44-4209-aa4a-020331708117#033[00m
Nov 25 03:35:11 np0005534516 nova_compute[253538]: 2025-11-25 08:35:11.359 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.176 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG nova.network.neutron [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.560 253542 DEBUG nova.compute.manager [req-697b3ecf-193e-4f25-bf0e-6e131f90124a req-f29b0fb8-ac0e-4fb4-90eb-23ab29ac105e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-deleted-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.725 253542 DEBUG nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.727 253542 DEBUG nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:12 np0005534516 nova_compute[253538]: 2025-11-25 08:35:12.727 253542 WARNING nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.185 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059698.18283, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.186 253542 INFO nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.206 253542 DEBUG nova.compute.manager [None req-81db9d12-62c4-4de8-9adf-6a8bf78990ab - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 140 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 24 KiB/s wr, 189 op/s
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.589 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.942 253542 DEBUG nova.network.neutron [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.970 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.996 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.#033[00m
Nov 25 03:35:13 np0005534516 nova_compute[253538]: 2025-11-25 08:35:13.996 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.010 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.020 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.021 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.022 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.022 253542 DEBUG os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.025 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.027 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.030 253542 INFO os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.037 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.040 253542 WARNING nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.046 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.046 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.049 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.049 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.050 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.050 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.066 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603385909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.517 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.549 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.583 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.882 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059699.8811498, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.883 253542 INFO nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.898 253542 DEBUG nova.compute.manager [None req-86ae75ba-a465-4286-a688-0fffc66ecbdb - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904315795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.986 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.989 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.990 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.991 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:14 np0005534516 nova_compute[253538]: 2025-11-25 08:35:14.993 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.009 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <name>instance-00000032</name>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:35:14</nova:creationTime>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:07:cd:40"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <target dev="tap15af3dd8-97"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:35:15 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:35:15 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:35:15 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:35:15 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.010 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.012 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.012 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.013 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.013 253542 DEBUG os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.015 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.015 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.019 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.019 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.0765] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.080 253542 INFO os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:35:15 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.1734] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 25 03:35:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:15Z|00636|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:35:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:15Z|00637|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.173 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 podman[321773]: 2025-11-25 08:35:15.182883991 +0000 UTC m=+0.073614859 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.187 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '13', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.190 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.192 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:35:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:15Z|00638|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:35:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:15Z|00639|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[88641a7b-baa0-4f24-9602-4bd50cccbec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.207 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb97d23-b2af-436c-9c81-9213cac752e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.210 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[345b537f-a23a-457c-b0e7-84f39ade06b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 systemd-udevd[321809]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:15 np0005534516 systemd-machined[215790]: New machine qemu-77-instance-00000032.
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.223 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e7651176-0b0a-47ae-a4d7-19e9c711020b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 systemd[1]: Started Virtual Machine qemu-77-instance-00000032.
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.2418] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.2427] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.252 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5282d0-d03a-4de8-a5fb-c77a5f466c37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 11 KiB/s wr, 156 op/s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.277 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[68d1d40b-0dda-4465-ac74-11eb5ffe6ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 systemd-udevd[321813]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0907a705-6274-48a4-ab52-67ac02f7b50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.2847] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc7d450-00f0-454f-89f5-cbd89b01733e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.316 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3c17bf-7427-4d9b-b864-84e455f3bdd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.3388] device (tap908154e6-30): carrier: link connected
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.346 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a916c606-dcbe-48b3-8e5a-e48678425d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.364 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ed39dd-2634-421c-918b-0df904c99d74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506993, 'reachable_time': 30597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321841, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.380 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe7e294-53f6-4fb3-a69f-7fc9d4a0580f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506993, 'tstamp': 506993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321842, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d26e34e0-3ba4-4809-b1ff-9ab6a411dc2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506993, 'reachable_time': 30597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321843, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3971ce30-5258-43eb-ba33-79be3cbfe0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.494 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13537e89-8144-486b-8e21-50836a30454e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 NetworkManager[48915]: <info>  [1764059715.4980] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:15 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.575 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:15Z|00640|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.580 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cac754c4-7abf-4cd4-a9b8-512fdc84aeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.581 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.583 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.632 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.632 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059715.6318457, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.633 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.635 253542 DEBUG nova.compute.manager [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.639 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.639 253542 DEBUG nova.compute.manager [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.660 253542 DEBUG nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.660 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 WARNING nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.663 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.666 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.688 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.689 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059715.6325064, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.689 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.712 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.715 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.906 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059700.9039412, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.906 253542 INFO nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:15 np0005534516 nova_compute[253538]: 2025-11-25 08:35:15.933 253542 DEBUG nova.compute.manager [None req-34ad2fe7-b995-48da-a2c5-cb2b21db8c38 - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:15 np0005534516 podman[321917]: 2025-11-25 08:35:15.992558142 +0000 UTC m=+0.051164686 container create cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 03:35:16 np0005534516 systemd[1]: Started libpod-conmon-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope.
Nov 25 03:35:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a31a78ffe15d62f0c5013ee06ef65d7b8dd91142ec16803a1a277f69dadb23b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:16 np0005534516 podman[321917]: 2025-11-25 08:35:15.967823067 +0000 UTC m=+0.026429631 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:16 np0005534516 podman[321917]: 2025-11-25 08:35:16.076487438 +0000 UTC m=+0.135094032 container init cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:35:16 np0005534516 podman[321917]: 2025-11-25 08:35:16.087022671 +0000 UTC m=+0.145629225 container start cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:16 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : New worker (321938) forked
Nov 25 03:35:16 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : Loading success.
Nov 25 03:35:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:16Z|00641|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:35:16 np0005534516 nova_compute[253538]: 2025-11-25 08:35:16.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:16Z|00642|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:35:16 np0005534516 nova_compute[253538]: 2025-11-25 08:35:16.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 136 op/s
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.748 253542 DEBUG nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.750 253542 DEBUG nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:17 np0005534516 nova_compute[253538]: 2025-11-25 08:35:17.750 253542 WARNING nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:35:18 np0005534516 nova_compute[253538]: 2025-11-25 08:35:18.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:18 np0005534516 nova_compute[253538]: 2025-11-25 08:35:18.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:18 np0005534516 nova_compute[253538]: 2025-11-25 08:35:18.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:35:18 np0005534516 nova_compute[253538]: 2025-11-25 08:35:18.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:35:18 np0005534516 podman[321947]: 2025-11-25 08:35:18.842468016 +0000 UTC m=+0.093032652 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.194 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059704.191218, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.195 253542 INFO nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.212 253542 DEBUG nova.compute.manager [None req-eb9e7493-4570-4b79-bea8-8b556d29863f - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.261 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059704.2376945, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.261 253542 INFO nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.0 KiB/s wr, 124 op/s
Nov 25 03:35:19 np0005534516 nova_compute[253538]: 2025-11-25 08:35:19.286 253542 DEBUG nova.compute.manager [None req-5c63b1a7-2038-4e84-af7a-520278b886c5 - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:20 np0005534516 nova_compute[253538]: 2025-11-25 08:35:20.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:20 np0005534516 nova_compute[253538]: 2025-11-25 08:35:20.681 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059705.6809561, 2f20fb1c-0a44-4209-aa4a-020331708117 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:20 np0005534516 nova_compute[253538]: 2025-11-25 08:35:20.682 253542 INFO nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:20 np0005534516 nova_compute[253538]: 2025-11-25 08:35:20.700 253542 DEBUG nova.compute.manager [None req-c157c6cf-732a-4df1-b659-095f9f17cdb2 - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:20 np0005534516 nova_compute[253538]: 2025-11-25 08:35:20.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.1 KiB/s wr, 115 op/s
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.596 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:21 np0005534516 nova_compute[253538]: 2025-11-25 08:35:21.678 253542 DEBUG nova.objects.instance [None req-9f740164-f431-46f6-a0c8-8a900b27e40e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.196 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059722.1954565, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.196 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.213 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.218 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.232 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:35:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890726360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.519 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.590 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.592 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:22 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:35:22 np0005534516 NetworkManager[48915]: <info>  [1764059722.7004] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:22Z|00643|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:35:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:22Z|00644|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:35:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:22Z|00645|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:35:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.719 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '14', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.722 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis#033[00m
Nov 25 03:35:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.723 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f54a0704-8a8c-4dba-a67d-e4ea579dc412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.725 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:22 np0005534516 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:35:22 np0005534516 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000032.scope: Consumed 7.326s CPU time.
Nov 25 03:35:22 np0005534516 systemd-machined[215790]: Machine qemu-77-instance-00000032 terminated.
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.816 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.817 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=59.942623138427734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.818 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.818 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:22 np0005534516 nova_compute[253538]: 2025-11-25 08:35:22.861 253542 DEBUG nova.compute.manager [None req-9f740164-f431-46f6-a0c8-8a900b27e40e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : Exiting Master process...
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : Exiting Master process...
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [ALERT]    (321936) : Current worker (321938) exited with code 143 (Terminated)
Nov 25 03:35:22 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : All workers exited. Exiting... (0)
Nov 25 03:35:22 np0005534516 systemd[1]: libpod-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope: Deactivated successfully.
Nov 25 03:35:22 np0005534516 podman[322021]: 2025-11-25 08:35:22.876967406 +0000 UTC m=+0.060632229 container died cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:35:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3a31a78ffe15d62f0c5013ee06ef65d7b8dd91142ec16803a1a277f69dadb23b-merged.mount: Deactivated successfully.
Nov 25 03:35:22 np0005534516 podman[322021]: 2025-11-25 08:35:22.92137505 +0000 UTC m=+0.105039863 container cleanup cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:22 np0005534516 systemd[1]: libpod-conmon-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope: Deactivated successfully.
Nov 25 03:35:23 np0005534516 podman[322058]: 2025-11-25 08:35:23.002433589 +0000 UTC m=+0.058832412 container remove cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.007 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[687f659e-1608-455c-b7d2-9dd1a6605e84]: (4, ('Tue Nov 25 08:35:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b)\ncfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b\nTue Nov 25 08:35:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b)\ncfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a37d22fc-2ed2-4569-bc08-968da5fa871f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.010 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:23 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.014 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24943546-06b1-4297-a32b-ad6d509746f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b91ae9-bab6-4ee9-afc6-d47cf2a2ee8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.048 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25be9ef3-4ed6-4ddb-acde-bb4a2b88bc94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[944b09f9-90f5-4405-aad0-4c26cba89f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506987, 'reachable_time': 35124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322077, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.064 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.064 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fdd051-9ccc-4e26-9cab-ffa6891149eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:23 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.103 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 93 op/s
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.456 253542 DEBUG nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.456 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 WARNING nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state suspended and task_state None.#033[00m
Nov 25 03:35:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106798545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.563 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.568 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.586 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:35:23 np0005534516 nova_compute[253538]: 2025-11-25 08:35:23.615 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:24 np0005534516 nova_compute[253538]: 2025-11-25 08:35:24.594 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.135 253542 INFO nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Resuming#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.136 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.175 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.176 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.177 253542 DEBUG nova.network.neutron [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 83 op/s
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.606 253542 DEBUG nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.608 253542 WARNING nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:35:25 np0005534516 nova_compute[253538]: 2025-11-25 08:35:25.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.874 253542 DEBUG nova.network.neutron [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.896 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.903 253542 DEBUG nova.virt.libvirt.vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.904 253542 DEBUG nova.network.os_vif_util [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.905 253542 DEBUG nova.network.os_vif_util [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.905 253542 DEBUG os_vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.907 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.908 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.911 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.912 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.912 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.913 253542 INFO os_vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:35:26 np0005534516 nova_compute[253538]: 2025-11-25 08:35:26.935 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:27 np0005534516 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 03:35:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:27Z|00646|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 03:35:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:27Z|00647|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.0204] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Nov 25 03:35:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:27Z|00648|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 systemd-udevd[322114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:27 np0005534516 systemd-machined[215790]: New machine qemu-78-instance-00000032.
Nov 25 03:35:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:27Z|00649|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.061 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '15', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.063 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.064 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6#033[00m
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.0660] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.0668] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:27 np0005534516 systemd[1]: Started Virtual Machine qemu-78-instance-00000032.
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09e2be14-e5fb-4e50-b5f8-1debbf13292a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.076 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.078 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.078 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6d0b52-5123-43ff-87f1-5b6213dd788b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.079 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53857431-a190-4942-a45f-5f8416c51690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.093 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a34ad5-a9d3-404c-9365-5b1b4ff0bb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.104 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b92fe500-b099-4361-b4f8-48104f1b1140]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.134 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36a1745a-3897-48aa-bc41-156bd2fc42bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ebbbe2-baac-48d7-96c5-50cbfc931b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 systemd-udevd[322116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.1418] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.176 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[77081318-ec25-4f75-9863-afe1e4fbfe57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.179 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c165424-c4c1-46d2-a938-e27a653a30af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.2050] device (tap908154e6-30): carrier: link connected
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.213 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba833ee-4e21-40f2-b56d-0f123befc591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.235 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff64ab-b4cb-4663-b58f-9151037a8ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508180, 'reachable_time': 33009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322147, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5d91ba-864d-4cff-b343-21a8506ef3ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508180, 'tstamp': 508180}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322148, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a89d1850-e270-4514-8513-9b866b70058e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508180, 'reachable_time': 33009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322149, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.312 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2e5b52-30e5-412a-a707-d50c1de10d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.372 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.373 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.375 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35a5f83a-e7e4-4d75-87f3-f5ec572812de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.376 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.376 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.377 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 NetworkManager[48915]: <info>  [1764059727.3794] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 25 03:35:27 np0005534516 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.383 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:27Z|00650|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.397 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.405 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d06e43af-8949-46dc-a190-94be5525bcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.407 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.407 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.494 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.495 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.503 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.504 253542 INFO nova.compute.claims [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:35:27 np0005534516 nova_compute[253538]: 2025-11-25 08:35:27.626 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:27 np0005534516 podman[322181]: 2025-11-25 08:35:27.772207532 +0000 UTC m=+0.042610546 container create 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:35:27 np0005534516 systemd[1]: Started libpod-conmon-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope.
Nov 25 03:35:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9436bb21b625d65243e687d48f6f742a3a3013b2c6388d6c2603db017f5bb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:27 np0005534516 podman[322181]: 2025-11-25 08:35:27.75019437 +0000 UTC m=+0.020597414 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:27 np0005534516 podman[322181]: 2025-11-25 08:35:27.854517234 +0000 UTC m=+0.124920248 container init 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:35:27 np0005534516 podman[322181]: 2025-11-25 08:35:27.867081432 +0000 UTC m=+0.137484446 container start 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:35:27 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : New worker (322254) forked
Nov 25 03:35:27 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : Loading success.
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784469813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.081 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.083 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059728.0814462, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.083 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.088 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.095 253542 DEBUG nova.compute.provider_tree [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.099 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.104 253542 DEBUG nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.104 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.108 253542 DEBUG nova.scheduler.client.report [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.114 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.130 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance running successfully.#033[00m
Nov 25 03:35:28 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.131 253542 DEBUG nova.virt.libvirt.guest [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.131 253542 DEBUG nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059728.0860713, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.157 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.158 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.161 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.164 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.186 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.235 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.261 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.287 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.399 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.401 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.402 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating image(s)
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.428 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.454 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.477 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.481 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.570 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.571 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.572 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.572 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.594 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.597 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.927 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:35:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:35:28 np0005534516 nova_compute[253538]: 2025-11-25 08:35:28.994 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] resizing rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.105 253542 DEBUG nova.objects.instance [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'migration_context' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.119 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Ensure instance console log exists: /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.121 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.122 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.127 253542 WARNING nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.131 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.131 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.135 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.135 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.136 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.136 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.139 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.142 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 85 B/s wr, 31 op/s
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:35:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265566824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.606 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.628 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:29 np0005534516 nova_compute[253538]: 2025-11-25 08:35:29.632 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659968424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.113 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.116 253542 DEBUG nova.objects.instance [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'pci_devices' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.132 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <uuid>cab0bbd2-96e3-43ed-970b-0b49c7581fef</uuid>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <name>instance-00000043</name>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersAaction247Test-server-1613086338</nova:name>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:35:29</nova:creationTime>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:user uuid="78e69376f7924a5695ba6f4672139f68">tempest-ServersAaction247Test-1472148412-project-member</nova:user>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <nova:project uuid="d2de800215da4cac8d9fd3a6a3cf4a55">tempest-ServersAaction247Test-1472148412</nova:project>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="serial">cab0bbd2-96e3-43ed-970b-0b49c7581fef</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="uuid">cab0bbd2-96e3-43ed-970b-0b49c7581fef</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/console.log" append="off"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:35:30 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:35:30 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:35:30 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:35:30 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.225 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.226 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.227 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Using config drive
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.265 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:30 np0005534516 nova_compute[253538]: 2025-11-25 08:35:30.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.166 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating config drive at /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.173 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9x7oxwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 142 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 637 KiB/s wr, 30 op/s
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.322 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9x7oxwh" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.348 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.351 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.541 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.542 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deleting local config drive /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config because it was imported into RBD.
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:35:31 np0005534516 systemd-machined[215790]: New machine qemu-79-instance-00000043.
Nov 25 03:35:31 np0005534516 systemd[1]: Started Virtual Machine qemu-79-instance-00000043.
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.745 253542 DEBUG nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.746 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.747 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.748 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.748 253542 DEBUG nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.749 253542 WARNING nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.824 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.825 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.828 253542 INFO nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Terminating instance
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.829 253542 DEBUG nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 03:35:31 np0005534516 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 03:35:31 np0005534516 NetworkManager[48915]: <info>  [1764059731.8765] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:31Z|00651|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 03:35:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:31Z|00652|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 03:35:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:31Z|00653|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.898 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '16', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:35:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.901 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 03:35:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.902 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 03:35:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c71ba676-22c2-4c80-b547-46342e183d2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:35:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.906 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 03:35:31 np0005534516 nova_compute[253538]: 2025-11-25 08:35:31.910 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:31 np0005534516 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 03:35:31 np0005534516 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000032.scope: Consumed 4.796s CPU time.
Nov 25 03:35:31 np0005534516 systemd-machined[215790]: Machine qemu-78-instance-00000032 terminated.
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : Exiting Master process...
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : Exiting Master process...
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [ALERT]    (322235) : Current worker (322254) exited with code 143 (Terminated)
Nov 25 03:35:32 np0005534516 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : All workers exited. Exiting... (0)
Nov 25 03:35:32 np0005534516 systemd[1]: libpod-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope: Deactivated successfully.
Nov 25 03:35:32 np0005534516 podman[322598]: 2025-11-25 08:35:32.043515808 +0000 UTC m=+0.047388205 container died 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.083 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.084 253542 DEBUG nova.objects.instance [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.095 253542 DEBUG nova.virt.libvirt.vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.095 253542 DEBUG nova.network.os_vif_util [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.096 253542 DEBUG nova.network.os_vif_util [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.097 253542 DEBUG os_vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:35:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5a9436bb21b625d65243e687d48f6f742a3a3013b2c6388d6c2603db017f5bb0-merged.mount: Deactivated successfully.
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.109 253542 INFO os_vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')#033[00m
Nov 25 03:35:32 np0005534516 podman[322598]: 2025-11-25 08:35:32.109646514 +0000 UTC m=+0.113518901 container cleanup 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 03:35:32 np0005534516 systemd[1]: libpod-conmon-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope: Deactivated successfully.
Nov 25 03:35:32 np0005534516 podman[322635]: 2025-11-25 08:35:32.174361454 +0000 UTC m=+0.045443442 container remove 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.179 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9970b97-4889-4591-b20d-e88cd54e3e32]: (4, ('Tue Nov 25 08:35:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81)\n1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81\nTue Nov 25 08:35:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81)\n1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.181 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ff64ef-3388-4cde-8a12-3e23f579bd5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.182 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:32 np0005534516 kernel: tap908154e6-30: left promiscuous mode
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.185 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.201 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc36eff-aea2-48bc-9742-2cb4be42fd1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.218 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[22fa8bd0-b370-4471-8ea7-471358d1e991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efc8b560-6b5f-46b9-be58-977ea40102b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.242 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37cb74a9-9e73-4992-bd05-708fd74cbb3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508172, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322668, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.247 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.248 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fa98f1-4b0b-4a68-a01b-25a9da0d39ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.248 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.526 253542 INFO nova.virt.libvirt.driver [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deleting instance files /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51_del#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.527 253542 INFO nova.virt.libvirt.driver [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deletion of /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51_del complete#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.582 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059732.581664, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.583 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.584 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.584 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.587 253542 INFO nova.virt.libvirt.driver [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance spawned successfully.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.588 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.611 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.618 253542 INFO nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.619 253542 DEBUG oslo.service.loopingcall [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.620 253542 DEBUG nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.621 253542 DEBUG nova.network.neutron [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.627 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.631 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.632 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.633 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.634 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.634 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.635 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059732.585585, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.692 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.694 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.711 253542 INFO nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 4.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.712 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.713 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.765 253542 INFO nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 5.29 seconds to build instance.#033[00m
Nov 25 03:35:32 np0005534516 nova_compute[253538]: 2025-11-25 08:35:32.783 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:33.251 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 160 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.4 MiB/s wr, 10 op/s
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 DEBUG nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 WARNING nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.883 253542 DEBUG nova.network.neutron [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.904 253542 INFO nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.944 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.945 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:33 np0005534516 nova_compute[253538]: 2025-11-25 08:35:33.962 253542 DEBUG nova.compute.manager [req-09053c5a-a672-4e4c-97c0-a66975d96667 req-382fd079-8979-470b-83ca-5d7e8ec7b13d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-deleted-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.036 253542 DEBUG oslo_concurrency.processutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.350576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734350603, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 253, "total_data_size": 3162604, "memory_usage": 3207088, "flush_reason": "Manual Compaction"}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734376424, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3094649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30739, "largest_seqno": 32858, "table_properties": {"data_size": 3085310, "index_size": 5705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20612, "raw_average_key_size": 20, "raw_value_size": 3066062, "raw_average_value_size": 3062, "num_data_blocks": 252, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059539, "oldest_key_time": 1764059539, "file_creation_time": 1764059734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 25886 microseconds, and 5894 cpu microseconds.
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.376461) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3094649 bytes OK
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.376478) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378709) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378722) EVENT_LOG_v1 {"time_micros": 1764059734378718, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3153607, prev total WAL file size 3153607, number of live WAL files 2.
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.379663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3022KB)], [68(6971KB)]
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734379718, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10233739, "oldest_snapshot_seqno": -1}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397339520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.488 253542 DEBUG oslo_concurrency.processutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.493 253542 DEBUG nova.compute.provider_tree [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.510 253542 DEBUG nova.scheduler.client.report [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5790 keys, 8519501 bytes, temperature: kUnknown
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734525497, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8519501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8480901, "index_size": 23002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 145745, "raw_average_key_size": 25, "raw_value_size": 8376996, "raw_average_value_size": 1446, "num_data_blocks": 939, "num_entries": 5790, "num_filter_entries": 5790, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.525729) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8519501 bytes
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.528631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.2 rd, 58.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6312, records dropped: 522 output_compression: NoCompression
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.528649) EVENT_LOG_v1 {"time_micros": 1764059734528640, "job": 38, "event": "compaction_finished", "compaction_time_micros": 145861, "compaction_time_cpu_micros": 30827, "output_level": 6, "num_output_files": 1, "total_output_size": 8519501, "num_input_records": 6312, "num_output_records": 5790, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734529150, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734530179, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.379537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.531 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.567 253542 INFO nova.scheduler.client.report [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Deleted allocations for instance 0feca801-4630-4450-b915-616d8496ab51#033[00m
Nov 25 03:35:34 np0005534516 nova_compute[253538]: 2025-11-25 08:35:34.655 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.180 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.218 253542 INFO nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] instance snapshotting#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.219 253542 DEBUG nova.objects.instance [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'flavor' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 127 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 822 KiB/s rd, 1.8 MiB/s wr, 89 op/s
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.511 253542 INFO nova.virt.libvirt.driver [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Beginning live snapshot process#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.688 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.760 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.761 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.762 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.763 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.763 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.766 253542 INFO nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Terminating instance#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.768 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.768 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquired lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.769 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.834 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.835 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.910 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:35:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.991 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:35 np0005534516 nova_compute[253538]: 2025-11-25 08:35:35.992 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.000 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.000 253542 INFO nova.compute.claims [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.022 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] creating snapshot(befacc2a29f94475a36706369e285e5b) on rbd image(cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.065 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.153 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.388 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 25 03:35:36 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.415 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Releasing lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.416 253542 DEBUG nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.455 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] cloning vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk@befacc2a29f94475a36706369e285e5b to images/dd6f5947-0c8f-4755-b44f-279630d6448e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:35:36 np0005534516 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 25 03:35:36 np0005534516 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000043.scope: Consumed 4.851s CPU time.
Nov 25 03:35:36 np0005534516 systemd-machined[215790]: Machine qemu-79-instance-00000043 terminated.
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.560 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] flattening images/dd6f5947-0c8f-4755-b44f-279630d6448e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.646 253542 INFO nova.virt.libvirt.driver [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance destroyed successfully.#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.647 253542 DEBUG nova.objects.instance [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'resources' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315898247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.683 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.688 253542 DEBUG nova.compute.provider_tree [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.705 253542 DEBUG nova.scheduler.client.report [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.738 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.739 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.828 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.828 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.882 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.898 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] removing snapshot(befacc2a29f94475a36706369e285e5b) on rbd image(cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:35:36 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.902 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:36.999 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.001 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.002 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating image(s)#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.032 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.061 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.083 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.086 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.185 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.186 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.187 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.187 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.210 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.213 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9396c5ff-9457-400c-8916-ecd03eded0c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 112 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.390 253542 DEBUG nova.policy [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '596f8d994ec145beb9244f5f01713555', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b96d13f13da43468269abb6dc6185d1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:35:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 25 03:35:37 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.473 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.474 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.489 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:35:37 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.547 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.548 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.555 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.556 253542 INFO nova.compute.claims [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.570 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] creating snapshot(snap) on rbd image(dd6f5947-0c8f-4755-b44f-279630d6448e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.624 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9396c5ff-9457-400c-8916-ecd03eded0c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.698 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] resizing rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.787 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.826 253542 DEBUG nova.objects.instance [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.837 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Ensure instance console log exists: /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:37 np0005534516 nova_compute[253538]: 2025-11-25 08:35:37.839 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716399622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.216 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.224 253542 DEBUG nova.compute.provider_tree [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.247 253542 DEBUG nova.scheduler.client.report [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.298 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.300 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.361 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.362 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.373 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Successfully created port: 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.387 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.418 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:35:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 25 03:35:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 25 03:35:38 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.505 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.507 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.507 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating image(s)#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.537 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.561 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.588 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.591 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.679 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.698 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.700 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4e9d3984-d789-45e1-83e3-8909597d3265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:38 np0005534516 nova_compute[253538]: 2025-11-25 08:35:38.985 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4e9d3984-d789-45e1-83e3-8909597d3265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.033 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] resizing rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.112 253542 DEBUG nova.objects.instance [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.125 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Ensure instance console log exists: /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.127 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.174 253542 DEBUG nova.policy [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4de06a7985be4463b069db269e2882d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:35:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 135 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 355 op/s
Nov 25 03:35:39 np0005534516 podman[323289]: 2025-11-25 08:35:39.80706653 +0000 UTC m=+0.056382516 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:35:39 np0005534516 nova_compute[253538]: 2025-11-25 08:35:39.831 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Successfully created port: d553507f-4019-4ce0-b549-4d221b9089cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.015 253542 INFO nova.virt.libvirt.driver [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Snapshot image upload complete#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.015 253542 INFO nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 4.78 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.056 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.300 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Successfully updated port: 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquired lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.673 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.711 253542 DEBUG nova.compute.manager [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-changed-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.712 253542 DEBUG nova.compute.manager [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Refreshing instance network info cache due to event network-changed-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.712 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.726 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.881 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Successfully updated port: d553507f-4019-4ce0-b549-4d221b9089cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:40 np0005534516 nova_compute[253538]: 2025-11-25 08:35:40.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.059 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.060 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.060 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:41 np0005534516 nova_compute[253538]: 2025-11-25 08:35:41.104 253542 DEBUG nova.compute.manager [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-changed-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:41 np0005534516 nova_compute[253538]: 2025-11-25 08:35:41.105 253542 DEBUG nova.compute.manager [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Refreshing instance network info cache due to event network-changed-d553507f-4019-4ce0-b549-4d221b9089cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:35:41 np0005534516 nova_compute[253538]: 2025-11-25 08:35:41.105 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:41 np0005534516 nova_compute[253538]: 2025-11-25 08:35:41.113 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:35:41 np0005534516 nova_compute[253538]: 2025-11-25 08:35:41.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 185 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.6 MiB/s wr, 232 op/s
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.442 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.479 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Releasing lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.480 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance network_info: |[{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.480 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.481 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Refreshing network info cache for port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.486 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start _get_guest_xml network_info=[{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.493 253542 WARNING nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.503 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.504 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.520 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.521 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.522 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.522 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.523 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.523 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.524 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.524 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.525 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.525 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.526 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.526 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.527 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.527 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.532 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.591 253542 INFO nova.virt.libvirt.driver [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deleting instance files /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef_del#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.593 253542 INFO nova.virt.libvirt.driver [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deletion of /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef_del complete#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.683 253542 INFO nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 6.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.685 253542 DEBUG oslo.service.loopingcall [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.686 253542 DEBUG nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.686 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.838 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.841 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.855 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:42 np0005534516 nova_compute[253538]: 2025-11-25 08:35:42.871 253542 INFO nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557205148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.010 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.032 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.037 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.076 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.077 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance network_info: |[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.078 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.078 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.079 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.079 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Refreshing network info cache for port d553507f-4019-4ce0-b549-4d221b9089cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.082 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start _get_guest_xml network_info=[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.095 253542 WARNING nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.102 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.103 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.105 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.106 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.107 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.107 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.111 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.115 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.198 253542 DEBUG oslo_concurrency.processutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 218 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.3 MiB/s wr, 320 op/s
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586325570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.510 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.512 253542 DEBUG nova.virt.libvirt.vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:36Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.512 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.527 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.529 253542 DEBUG nova.objects.instance [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.546 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <uuid>9396c5ff-9457-400c-8916-ecd03eded0c1</uuid>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <name>instance-00000044</name>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-82069614</nova:name>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:35:42</nova:creationTime>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:user uuid="596f8d994ec145beb9244f5f01713555">tempest-InstanceActionsV221TestJSON-1033089060-project-member</nova:user>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:project uuid="3b96d13f13da43468269abb6dc6185d1">tempest-InstanceActionsV221TestJSON-1033089060</nova:project>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <nova:port uuid="33ae6d28-9d12-4e42-9874-7f5c7a27c9c8">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="serial">9396c5ff-9457-400c-8916-ecd03eded0c1</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="uuid">9396c5ff-9457-400c-8916-ecd03eded0c1</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9396c5ff-9457-400c-8916-ecd03eded0c1_disk">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:bb:e0:fa"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <target dev="tap33ae6d28-9d"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/console.log" append="off"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:35:43 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:35:43 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:35:43 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:35:43 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.548 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Preparing to wait for external event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.549 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.550 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.550 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.551 253542 DEBUG nova.virt.libvirt.vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:36Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204648431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.552 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.553 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.554 253542 DEBUG os_vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.562 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33ae6d28-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.563 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33ae6d28-9d, col_values=(('external_ids', {'iface-id': '33ae6d28-9d12-4e42-9874-7f5c7a27c9c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:e0:fa', 'vm-uuid': '9396c5ff-9457-400c-8916-ecd03eded0c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:43 np0005534516 NetworkManager[48915]: <info>  [1764059743.5662] manager: (tap33ae6d28-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.577 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.577 253542 INFO os_vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d')#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.600 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.624 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.711 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.712 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.713 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No VIF found with MAC fa:16:3e:bb:e0:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.713 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Using config drive#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371612900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.734 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.739 253542 DEBUG oslo_concurrency.processutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.743 253542 DEBUG nova.compute.provider_tree [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.764 253542 DEBUG nova.scheduler.client.report [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.791 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.816 253542 INFO nova.scheduler.client.report [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Deleted allocations for instance cab0bbd2-96e3-43ed-970b-0b49c7581fef#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.878 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.886 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updated VIF entry in instance network info cache for port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.886 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c97e4d49-54f7-46cc-8d0e-2441f3163053 does not exist
Nov 25 03:35:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 37b68f44-8f84-4fe8-9d47-b8e793bb0d76 does not exist
Nov 25 03:35:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1c440712-b92e-4656-98ca-30b5deb6b155 does not exist
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:35:43 np0005534516 nova_compute[253538]: 2025-11-25 08:35:43.901 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:35:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:35:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327401896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.079 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.080 253542 DEBUG nova.virt.libvirt.vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:38Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.081 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.081 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.083 253542 DEBUG nova.objects.instance [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.095 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <uuid>4e9d3984-d789-45e1-83e3-8909597d3265</uuid>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <name>instance-00000045</name>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:name>tempest-InstanceActionsTestJSON-server-1670811507</nova:name>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:35:43</nova:creationTime>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:user uuid="4de06a7985be4463b069db269e2882d4">tempest-InstanceActionsTestJSON-270987687-project-member</nova:user>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:project uuid="a20ef4bed55a408c8933a4956b2dd3e4">tempest-InstanceActionsTestJSON-270987687</nova:project>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <nova:port uuid="d553507f-4019-4ce0-b549-4d221b9089cd">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="serial">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="uuid">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk.config">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d8:47:0f"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <target dev="tapd553507f-40"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log" append="off"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:35:44 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:35:44 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:35:44 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:35:44 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Preparing to wait for external event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG nova.virt.libvirt.vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:38Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.099 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.099 253542 DEBUG os_vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd553507f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd553507f-40, col_values=(('external_ids', {'iface-id': 'd553507f-4019-4ce0-b549-4d221b9089cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:47:0f', 'vm-uuid': '4e9d3984-d789-45e1-83e3-8909597d3265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.1067] manager: (tapd553507f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.114 253542 INFO os_vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.168 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.168 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.169 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No VIF found with MAC fa:16:3e:d8:47:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.169 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Using config drive#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.191 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.198 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating config drive at /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.203 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeemjyazy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.347 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeemjyazy" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.373 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.376 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.438116994 +0000 UTC m=+0.042285167 container create 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 03:35:44 np0005534516 systemd[1]: Started libpod-conmon-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope.
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.497 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updated VIF entry in instance network info cache for port d553507f-4019-4ce0-b549-4d221b9089cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.498 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.512 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.418000984 +0000 UTC m=+0.022169167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.524 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.525 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deleting local config drive /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config because it was imported into RBD.#033[00m
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.531407061 +0000 UTC m=+0.135575254 container init 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.537247798 +0000 UTC m=+0.141415981 container start 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:35:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:35:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.540626679 +0000 UTC m=+0.144794902 container attach 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:35:44 np0005534516 systemd[1]: libpod-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope: Deactivated successfully.
Nov 25 03:35:44 np0005534516 modest_edison[323820]: 167 167
Nov 25 03:35:44 np0005534516 conmon[323820]: conmon 0a2f9a1f75369844990c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope/container/memory.events
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.544284167 +0000 UTC m=+0.148452360 container died 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 25 03:35:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6f1ce2f8c4119389a9f2ccc5b4f30564935916e4bc888b0271bc8f6281ca1f03-merged.mount: Deactivated successfully.
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.5812] manager: (tap33ae6d28-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Nov 25 03:35:44 np0005534516 kernel: tap33ae6d28-9d: entered promiscuous mode
Nov 25 03:35:44 np0005534516 podman[323787]: 2025-11-25 08:35:44.5871657 +0000 UTC m=+0.191333893 container remove 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:44Z|00654|binding|INFO|Claiming lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for this chassis.
Nov 25 03:35:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:44Z|00655|binding|INFO|33ae6d28-9d12-4e42-9874-7f5c7a27c9c8: Claiming fa:16:3e:bb:e0:fa 10.100.0.12
Nov 25 03:35:44 np0005534516 systemd[1]: libpod-conmon-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope: Deactivated successfully.
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.601 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating config drive at /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.606 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl6v_0lha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.608 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:fa 10.100.0.12'], port_security=['fa:16:3e:bb:e0:fa 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9396c5ff-9457-400c-8916-ecd03eded0c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b96d13f13da43468269abb6dc6185d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '086c6d0d-fe38-46de-a484-0a651367668f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92cc438c-7163-401b-8f9b-e1ec1d29a1db, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.609 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 in datapath 77b0065f-12e9-4121-b463-93a7fd9a5ff0 bound to our chassis#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.610 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77b0065f-12e9-4121-b463-93a7fd9a5ff0#033[00m
Nov 25 03:35:44 np0005534516 systemd-udevd[323851]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d93afc-459a-4319-910a-63cb47539277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.624 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77b0065f-11 in ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.626 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77b0065f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f344dd3b-3423-4a7b-a3d4-9888cb1999f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.627 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfbfaf2-f86e-4ad9-a994-73e434eb5f9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 systemd-machined[215790]: New machine qemu-80-instance-00000044.
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.6327] device (tap33ae6d28-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.6339] device (tap33ae6d28-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.639 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[397a8170-d4c2-4b37-adba-083a5b368c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 systemd[1]: Started Virtual Machine qemu-80-instance-00000044.
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.663 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28567bf5-ee02-48b9-a34f-d98356553680]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:44Z|00656|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 ovn-installed in OVS
Nov 25 03:35:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:44Z|00657|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 up in Southbound
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.693 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeaecdf-6fda-4da2-8276-293a287ba311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[359b5409-f26a-4197-92a2-93bb9596cc4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.6994] manager: (tap77b0065f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.729 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[75efa693-72cf-49db-a90d-6144ab93fb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.732 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a836d88b-7e60-4bf2-aff1-6e78a280f9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.7523] device (tap77b0065f-10): carrier: link connected
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.755 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl6v_0lha" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2f38a843-f857-4e80-b374-2a15ca1f24ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 podman[323878]: 2025-11-25 08:35:44.761397532 +0000 UTC m=+0.039872102 container create 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edf737d7-0d63-448e-8c0d-5ba2a895a222]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77b0065f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0a:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509935, 'reachable_time': 24057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323910, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.786 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.793 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.796 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b26553d5-ce85-4775-9150-e9b97c14bb97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509935, 'tstamp': 509935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323927, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 systemd[1]: Started libpod-conmon-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope.
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.814 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c16c373-5206-454a-abec-6854ca44a100]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77b0065f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0a:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509935, 'reachable_time': 24057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323934, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:44 np0005534516 podman[323878]: 2025-11-25 08:35:44.746142653 +0000 UTC m=+0.024617233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b24c2b1-519f-480b-a1d5-da8d22a6e6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 podman[323878]: 2025-11-25 08:35:44.855000848 +0000 UTC m=+0.133475448 container init 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:35:44 np0005534516 podman[323878]: 2025-11-25 08:35:44.86323506 +0000 UTC m=+0.141709630 container start 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:35:44 np0005534516 podman[323878]: 2025-11-25 08:35:44.867634017 +0000 UTC m=+0.146108617 container attach 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00b24520-3105-41df-b297-b0c3f0780dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77b0065f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.911 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77b0065f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 NetworkManager[48915]: <info>  [1764059744.9134] manager: (tap77b0065f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 25 03:35:44 np0005534516 kernel: tap77b0065f-10: entered promiscuous mode
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.922 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77b0065f-10, col_values=(('external_ids', {'iface-id': '9fb4a874-a628-417d-9e88-cf4274450252'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:44Z|00658|binding|INFO|Releasing lport 9fb4a874-a628-417d-9e88-cf4274450252 from this chassis (sb_readonly=0)
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.950 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[612c1b10-cb22-419a-a5a0-71f48e206cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.951 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-77b0065f-12e9-4121-b463-93a7fd9a5ff0
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 77b0065f-12e9-4121-b463-93a7fd9a5ff0
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.952 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'env', 'PROCESS_TAG=haproxy-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77b0065f-12e9-4121-b463-93a7fd9a5ff0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.957 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:44 np0005534516 nova_compute[253538]: 2025-11-25 08:35:44.958 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deleting local config drive /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config because it was imported into RBD.#033[00m
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.0039] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Nov 25 03:35:45 np0005534516 systemd-udevd[323891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:45 np0005534516 kernel: tapd553507f-40: entered promiscuous mode
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:45Z|00659|binding|INFO|Claiming lport d553507f-4019-4ce0-b549-4d221b9089cd for this chassis.
Nov 25 03:35:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:45Z|00660|binding|INFO|d553507f-4019-4ce0-b549-4d221b9089cd: Claiming fa:16:3e:d8:47:0f 10.100.0.9
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.0187] device (tapd553507f-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.0196] device (tapd553507f-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.028 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:45 np0005534516 systemd-machined[215790]: New machine qemu-81-instance-00000045.
Nov 25 03:35:45 np0005534516 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Nov 25 03:35:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:45Z|00661|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd ovn-installed in OVS
Nov 25 03:35:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:45Z|00662|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd up in Southbound
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.146 253542 DEBUG nova.compute.manager [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.147 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.147 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.148 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.148 253542 DEBUG nova.compute.manager [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Processing event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.174 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.174 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1735632, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.175 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.177 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.180 253542 INFO nova.virt.libvirt.driver [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance spawned successfully.#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.180 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.195 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.206 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.210 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.211 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.211 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.212 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.212 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.213 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.241 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.242 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1745691, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.242 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.260 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.263 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1773849, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.263 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:35:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 199 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.4 MiB/s wr, 201 op/s
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.296 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.300 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.344 253542 INFO nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 8.34 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.344 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 podman[324087]: 2025-11-25 08:35:45.357569815 +0000 UTC m=+0.052096151 container create 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.379 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:45 np0005534516 systemd[1]: Started libpod-conmon-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope.
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.400 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.3994076, 4e9d3984-d789-45e1-83e3-8909597d3265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.400 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49574312e8f4af9bf3a8c8febb2214b41ab32d1992c105e91eb0c713b6ff73e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.427 253542 INFO nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 9.45 seconds to build instance.#033[00m
Nov 25 03:35:45 np0005534516 podman[324087]: 2025-11-25 08:35:45.332962903 +0000 UTC m=+0.027489249 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.431 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 podman[324087]: 2025-11-25 08:35:45.433778573 +0000 UTC m=+0.128304899 container init 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.435 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.40055, 4e9d3984-d789-45e1-83e3-8909597d3265 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:35:45 np0005534516 podman[324087]: 2025-11-25 08:35:45.440260838 +0000 UTC m=+0.134787164 container start 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:35:45 np0005534516 podman[324106]: 2025-11-25 08:35:45.445375835 +0000 UTC m=+0.054998629 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.454 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:45 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : New worker (324135) forked
Nov 25 03:35:45 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : Loading success.
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.470 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.472 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.493 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.527 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66249d1f-478b-4b2b-a784-933c0556752e#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.536 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34f1439c-c6b0-43ac-a391-b071658c0316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.537 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66249d1f-41 in ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.539 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66249d1f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.539 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65d85625-b87c-4e0f-bf7f-cf06a807345c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f113b3c-0c6c-429a-815d-380c6f573d12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.553 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[aa255a23-1708-43b7-b587-6e0ab2fa843d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.580 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb39e3f0-6d7d-46e3-9800-4b32ab493cdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.620 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae55d5c7-4f0b-441a-927c-e084ce19c701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af6390b7-f465-4470-9417-3132034cade4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.6277] manager: (tap66249d1f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.661 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2fdfa6-9346-4de5-9c15-397f8d018685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.668 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82d87c45-c98d-4662-9997-0fc9777a89c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.6952] device (tap66249d1f-40): carrier: link connected
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.703 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[20eaed6a-4466-4ac5-9ae6-f48cd136cc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.726 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fcaa06-2b42-4f9f-99ff-0c27b156b68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510029, 'reachable_time': 39756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324162, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93aa8b7c-4133-4c9d-b810-107401657fad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:29ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510029, 'tstamp': 510029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324163, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.768 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8144f26d-ef07-4220-8cd9-b627bd5a5afe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510029, 'reachable_time': 39756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324165, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8418c390-5a66-41a1-810c-3ad37a5a7e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.860 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f44377-01e6-459b-85d3-407f894f8afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66249d1f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:45 np0005534516 kernel: tap66249d1f-40: entered promiscuous mode
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 NetworkManager[48915]: <info>  [1764059745.8649] manager: (tap66249d1f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66249d1f-40, col_values=(('external_ids', {'iface-id': '57f8eb8e-0895-4599-b15e-b1a08378dfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:45Z|00663|binding|INFO|Releasing lport 57f8eb8e-0895-4599-b15e-b1a08378dfc1 from this chassis (sb_readonly=0)
Nov 25 03:35:45 np0005534516 nova_compute[253538]: 2025-11-25 08:35:45.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.893 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[164cb1cb-90d4-4d6c-92ea-9e764d339417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.895 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.896 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'env', 'PROCESS_TAG=haproxy-66249d1f-478b-4b2b-a784-933c0556752e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66249d1f-478b-4b2b-a784-933c0556752e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 25 03:35:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 25 03:35:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 25 03:35:45 np0005534516 affectionate_kepler[323936]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:35:45 np0005534516 affectionate_kepler[323936]: --> relative data size: 1.0
Nov 25 03:35:45 np0005534516 affectionate_kepler[323936]: --> All data devices are unavailable
Nov 25 03:35:46 np0005534516 systemd[1]: libpod-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Deactivated successfully.
Nov 25 03:35:46 np0005534516 systemd[1]: libpod-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Consumed 1.070s CPU time.
Nov 25 03:35:46 np0005534516 podman[323878]: 2025-11-25 08:35:46.041508116 +0000 UTC m=+1.319982686 container died 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:35:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0-merged.mount: Deactivated successfully.
Nov 25 03:35:46 np0005534516 podman[323878]: 2025-11-25 08:35:46.096933656 +0000 UTC m=+1.375408226 container remove 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:35:46 np0005534516 systemd[1]: libpod-conmon-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Deactivated successfully.
Nov 25 03:35:46 np0005534516 podman[324264]: 2025-11-25 08:35:46.291543197 +0000 UTC m=+0.060021505 container create 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:35:46 np0005534516 systemd[1]: Started libpod-conmon-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope.
Nov 25 03:35:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:46 np0005534516 podman[324264]: 2025-11-25 08:35:46.261031497 +0000 UTC m=+0.029509825 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcacdbab04bdd2d6eb0a58f860b5d214abfab69870a1def2a93eb666e475b92f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:46 np0005534516 podman[324264]: 2025-11-25 08:35:46.370984292 +0000 UTC m=+0.139462610 container init 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 03:35:46 np0005534516 podman[324264]: 2025-11-25 08:35:46.380430185 +0000 UTC m=+0.148908493 container start 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:35:46 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : New worker (324343) forked
Nov 25 03:35:46 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : Loading success.
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.738413807 +0000 UTC m=+0.045698189 container create 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:35:46 np0005534516 systemd[1]: Started libpod-conmon-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope.
Nov 25 03:35:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.713015114 +0000 UTC m=+0.020299536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.811058339 +0000 UTC m=+0.118342821 container init 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.819220888 +0000 UTC m=+0.126505280 container start 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.82189312 +0000 UTC m=+0.129177512 container attach 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:35:46 np0005534516 dreamy_tharp[324410]: 167 167
Nov 25 03:35:46 np0005534516 systemd[1]: libpod-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope: Deactivated successfully.
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.825827066 +0000 UTC m=+0.133111448 container died 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:35:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d75aed93030466703e9360beb8f7435f6f5a5304f6c269e3930f053d390a1e40-merged.mount: Deactivated successfully.
Nov 25 03:35:46 np0005534516 podman[324394]: 2025-11-25 08:35:46.860873858 +0000 UTC m=+0.168158250 container remove 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:35:46 np0005534516 systemd[1]: libpod-conmon-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope: Deactivated successfully.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.081 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059732.08056, 0feca801-4630-4450-b915-616d8496ab51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.082 253542 INFO nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Stopped (Lifecycle Event)
Nov 25 03:35:47 np0005534516 podman[324433]: 2025-11-25 08:35:47.081203609 +0000 UTC m=+0.052388358 container create 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.103 253542 DEBUG nova.compute.manager [None req-a99a98c5-bb74-4a7a-a84d-c3785864261e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:35:47 np0005534516 systemd[1]: Started libpod-conmon-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope.
Nov 25 03:35:47 np0005534516 podman[324433]: 2025-11-25 08:35:47.061562191 +0000 UTC m=+0.032746960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:47 np0005534516 podman[324433]: 2025-11-25 08:35:47.197266399 +0000 UTC m=+0.168451228 container init 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:35:47 np0005534516 podman[324433]: 2025-11-25 08:35:47.20807502 +0000 UTC m=+0.179259769 container start 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 03:35:47 np0005534516 podman[324433]: 2025-11-25 08:35:47.220187665 +0000 UTC m=+0.191372494 container attach 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:35:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 168 op/s
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.334 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 WARNING nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received unexpected event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with vm_state active and task_state None.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Processing event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 WARNING nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state building and task_state spawning.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.339 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.346 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059747.3459518, 4e9d3984-d789-45e1-83e3-8909597d3265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Resumed (Lifecycle Event)
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.361 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.366 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance spawned successfully.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.367 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.395 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.396 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.397 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.399 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.400 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.402 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.415 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.423 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.481 253542 INFO nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 8.98 seconds to spawn the instance on the hypervisor.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.482 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.565 253542 INFO nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 10.03 seconds to build instance.
Nov 25 03:35:47 np0005534516 nova_compute[253538]: 2025-11-25 08:35:47.594 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]: {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    "0": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "devices": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "/dev/loop3"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            ],
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_name": "ceph_lv0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_size": "21470642176",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "name": "ceph_lv0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "tags": {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_name": "ceph",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.crush_device_class": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.encrypted": "0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_id": "0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.vdo": "0"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            },
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "vg_name": "ceph_vg0"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        }
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    ],
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    "1": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "devices": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "/dev/loop4"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            ],
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_name": "ceph_lv1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_size": "21470642176",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "name": "ceph_lv1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "tags": {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_name": "ceph",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.crush_device_class": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.encrypted": "0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_id": "1",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.vdo": "0"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            },
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "vg_name": "ceph_vg1"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        }
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    ],
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    "2": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "devices": [
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "/dev/loop5"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            ],
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_name": "ceph_lv2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_size": "21470642176",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "name": "ceph_lv2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "tags": {
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.cluster_name": "ceph",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.crush_device_class": "",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.encrypted": "0",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osd_id": "2",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:                "ceph.vdo": "0"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            },
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "type": "block",
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:            "vg_name": "ceph_vg2"
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:        }
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]:    ]
Nov 25 03:35:48 np0005534516 strange_ishizaka[324449]: }
Nov 25 03:35:48 np0005534516 systemd[1]: libpod-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope: Deactivated successfully.
Nov 25 03:35:48 np0005534516 podman[324433]: 2025-11-25 08:35:48.170297 +0000 UTC m=+1.141481749 container died 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.253 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.253 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.254 253542 INFO nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Terminating instance#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.255 253542 DEBUG nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7-merged.mount: Deactivated successfully.
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.729 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.729 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.730 253542 INFO nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Rebooting instance#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.748 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.748 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.749 253542 DEBUG nova.network.neutron [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:35:48 np0005534516 kernel: tap33ae6d28-9d (unregistering): left promiscuous mode
Nov 25 03:35:48 np0005534516 NetworkManager[48915]: <info>  [1764059748.7560] device (tap33ae6d28-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:48Z|00664|binding|INFO|Releasing lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 from this chassis (sb_readonly=0)
Nov 25 03:35:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:48Z|00665|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 down in Southbound
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:48Z|00666|binding|INFO|Removing iface tap33ae6d28-9d ovn-installed in OVS
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.799 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:fa 10.100.0.12'], port_security=['fa:16:3e:bb:e0:fa 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9396c5ff-9457-400c-8916-ecd03eded0c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b96d13f13da43468269abb6dc6185d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '086c6d0d-fe38-46de-a484-0a651367668f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92cc438c-7163-401b-8f9b-e1ec1d29a1db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.800 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 in datapath 77b0065f-12e9-4121-b463-93a7fd9a5ff0 unbound from our chassis#033[00m
Nov 25 03:35:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.801 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77b0065f-12e9-4121-b463-93a7fd9a5ff0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb565bd2-4d01-4396-81f7-2a291e9c9280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.817 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 namespace which is not needed anymore#033[00m
Nov 25 03:35:48 np0005534516 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 25 03:35:48 np0005534516 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Consumed 3.615s CPU time.
Nov 25 03:35:48 np0005534516 systemd-machined[215790]: Machine qemu-80-instance-00000044 terminated.
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.962 253542 INFO nova.virt.libvirt.driver [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance destroyed successfully.#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.963 253542 DEBUG nova.objects.instance [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'resources' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.976 253542 DEBUG nova.virt.libvirt.vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_
project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:45Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.977 253542 DEBUG nova.network.os_vif_util [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.977 253542 DEBUG nova.network.os_vif_util [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.978 253542 DEBUG os_vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33ae6d28-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:48 np0005534516 nova_compute[253538]: 2025-11-25 08:35:48.985 253542 INFO os_vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d')#033[00m
Nov 25 03:35:49 np0005534516 podman[324433]: 2025-11-25 08:35:49.034807335 +0000 UTC m=+2.005992084 container remove 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:35:49 np0005534516 podman[324500]: 2025-11-25 08:35:49.107984901 +0000 UTC m=+0.149187270 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:35:49 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:49 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:49 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [ALERT]    (324133) : Current worker (324135) exited with code 143 (Terminated)
Nov 25 03:35:49 np0005534516 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [WARNING]  (324133) : All workers exited. Exiting... (0)
Nov 25 03:35:49 np0005534516 systemd[1]: libpod-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope: Deactivated successfully.
Nov 25 03:35:49 np0005534516 podman[324550]: 2025-11-25 08:35:49.159805474 +0000 UTC m=+0.053336285 container died 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:35:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-49574312e8f4af9bf3a8c8febb2214b41ab32d1992c105e91eb0c713b6ff73e1-merged.mount: Deactivated successfully.
Nov 25 03:35:49 np0005534516 systemd[1]: libpod-conmon-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope: Deactivated successfully.
Nov 25 03:35:49 np0005534516 podman[324550]: 2025-11-25 08:35:49.263146701 +0000 UTC m=+0.156677532 container cleanup 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:35:49 np0005534516 systemd[1]: libpod-conmon-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope: Deactivated successfully.
Nov 25 03:35:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 172 op/s
Nov 25 03:35:49 np0005534516 podman[324662]: 2025-11-25 08:35:49.335816384 +0000 UTC m=+0.041668301 container remove 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.341 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c63a2e59-3ea0-41d0-94c9-c97e3a8175da]: (4, ('Tue Nov 25 08:35:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 (9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6)\n9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6\nTue Nov 25 08:35:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 (9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6)\n9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[584f4014-2ab7-42fc-80e3-1500badeb16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77b0065f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:49 np0005534516 kernel: tap77b0065f-10: left promiscuous mode
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.366 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3752506a-c563-48a5-b997-44d202aaa7de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.388 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07b128ce-4e97-42b9-895d-9d37b021a9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45d8882f-93f2-452a-bc6b-0cccc98458d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a485ca8-c812-4b10-8b78-a8479692f665]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 36644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324689, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 systemd[1]: run-netns-ovnmeta\x2d77b0065f\x2d12e9\x2d4121\x2db463\x2d93a7fd9a5ff0.mount: Deactivated successfully.
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.413 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.413 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8706cead-7c74-4b45-aff4-72487fdd5d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.508 253542 INFO nova.virt.libvirt.driver [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deleting instance files /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1_del#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.509 253542 INFO nova.virt.libvirt.driver [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deletion of /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1_del complete#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.621 253542 INFO nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.621 253542 DEBUG oslo.service.loopingcall [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.622 253542 DEBUG nova.compute.manager [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:49 np0005534516 nova_compute[253538]: 2025-11-25 08:35:49.622 253542 DEBUG nova.network.neutron [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:49 np0005534516 podman[324731]: 2025-11-25 08:35:49.700604259 +0000 UTC m=+0.038466835 container create 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:35:49 np0005534516 systemd[1]: Started libpod-conmon-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope.
Nov 25 03:35:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:49 np0005534516 podman[324731]: 2025-11-25 08:35:49.682094751 +0000 UTC m=+0.019957347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:49 np0005534516 podman[324731]: 2025-11-25 08:35:49.800517084 +0000 UTC m=+0.138379750 container init 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:35:49 np0005534516 podman[324731]: 2025-11-25 08:35:49.811805237 +0000 UTC m=+0.149667813 container start 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:35:49 np0005534516 podman[324731]: 2025-11-25 08:35:49.814465959 +0000 UTC m=+0.152328615 container attach 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:35:49 np0005534516 keen_perlman[324748]: 167 167
Nov 25 03:35:49 np0005534516 systemd[1]: libpod-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope: Deactivated successfully.
Nov 25 03:35:49 np0005534516 conmon[324748]: conmon 2a5d00baee5dc425c593 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope/container/memory.events
Nov 25 03:35:49 np0005534516 podman[324753]: 2025-11-25 08:35:49.891570082 +0000 UTC m=+0.041281371 container died 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 03:35:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3cbc937b4efd33327833009a85f307c6f48ced3ff857d1583fdd5163be30b4f7-merged.mount: Deactivated successfully.
Nov 25 03:35:49 np0005534516 podman[324753]: 2025-11-25 08:35:49.939250133 +0000 UTC m=+0.088961442 container remove 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:35:49 np0005534516 systemd[1]: libpod-conmon-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope: Deactivated successfully.
Nov 25 03:35:50 np0005534516 podman[324774]: 2025-11-25 08:35:50.184921004 +0000 UTC m=+0.044766724 container create e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:35:50 np0005534516 systemd[1]: Started libpod-conmon-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope.
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.224 253542 DEBUG nova.network.neutron [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:50 np0005534516 podman[324774]: 2025-11-25 08:35:50.168370289 +0000 UTC m=+0.028216029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:35:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.268 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.269 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:50 np0005534516 podman[324774]: 2025-11-25 08:35:50.27705996 +0000 UTC m=+0.136905700 container init e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 03:35:50 np0005534516 podman[324774]: 2025-11-25 08:35:50.283598567 +0000 UTC m=+0.143444287 container start e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:50 np0005534516 podman[324774]: 2025-11-25 08:35:50.286277259 +0000 UTC m=+0.146122999 container attach e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:35:50 np0005534516 kernel: tapd553507f-40 (unregistering): left promiscuous mode
Nov 25 03:35:50 np0005534516 NetworkManager[48915]: <info>  [1764059750.4894] device (tapd553507f-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:50Z|00667|binding|INFO|Releasing lport d553507f-4019-4ce0-b549-4d221b9089cd from this chassis (sb_readonly=0)
Nov 25 03:35:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:50Z|00668|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd down in Southbound
Nov 25 03:35:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:50Z|00669|binding|INFO|Removing iface tapd553507f-40 ovn-installed in OVS
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.539 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.540 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.541 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.542 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66249d1f-478b-4b2b-a784-933c0556752e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[123a7ffb-4f38-4474-9c68-dd776a10d21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.544 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace which is not needed anymore#033[00m
Nov 25 03:35:50 np0005534516 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 03:35:50 np0005534516 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 3.544s CPU time.
Nov 25 03:35:50 np0005534516 systemd-machined[215790]: Machine qemu-81-instance-00000045 terminated.
Nov 25 03:35:50 np0005534516 NetworkManager[48915]: <info>  [1764059750.6714] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.674 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.694 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance destroyed successfully.#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.694 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'resources' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.706 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.706 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.707 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.707 253542 DEBUG os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.709 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd553507f-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.716 253542 INFO os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.727 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start _get_guest_xml network_info=[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.732 253542 WARNING nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.737 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : Exiting Master process...
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : Exiting Master process...
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [ALERT]    (324341) : Current worker (324343) exited with code 143 (Terminated)
Nov 25 03:35:50 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : All workers exited. Exiting... (0)
Nov 25 03:35:50 np0005534516 systemd[1]: libpod-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope: Deactivated successfully.
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.742 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:35:50 np0005534516 podman[324818]: 2025-11-25 08:35:50.749618711 +0000 UTC m=+0.071480392 container died 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.750 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.750 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.754 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.754 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.767 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dcacdbab04bdd2d6eb0a58f860b5d214abfab69870a1def2a93eb666e475b92f-merged.mount: Deactivated successfully.
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 podman[324818]: 2025-11-25 08:35:50.821493333 +0000 UTC m=+0.143354984 container cleanup 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:35:50 np0005534516 systemd[1]: libpod-conmon-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope: Deactivated successfully.
Nov 25 03:35:50 np0005534516 podman[324857]: 2025-11-25 08:35:50.890006104 +0000 UTC m=+0.047468797 container remove 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.895 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0871f00c-91b5-41ca-ae20-58ef58c13b19]: (4, ('Tue Nov 25 08:35:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec)\n382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec\nTue Nov 25 08:35:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec)\n382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[050a1d39-0aec-4bb2-8997-6eff98a9e381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.903 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:50 np0005534516 kernel: tap66249d1f-40: left promiscuous mode
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 nova_compute[253538]: 2025-11-25 08:35:50.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ccae7a-80fb-4d60-9aad-21c0165001ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.941 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d61e01-3b70-4f20-a474-388b431fe101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.943 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07ca9778-66bb-4591-8161-4bee54c42322]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.961 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[facac444-c169-4ad4-9245-7d4e0c5aacb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510021, 'reachable_time': 18556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324889, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.963 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.964 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1686cacf-3fe6-4a66-a61f-222af962a1c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:50 np0005534516 systemd[1]: run-netns-ovnmeta\x2d66249d1f\x2d478b\x2d4b2b\x2da784\x2d933c0556752e.mount: Deactivated successfully.
Nov 25 03:35:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]: {
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_id": 1,
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "type": "bluestore"
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    },
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_id": 2,
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "type": "bluestore"
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    },
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_id": 0,
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:        "type": "bluestore"
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]:    }
Nov 25 03:35:51 np0005534516 cranky_agnesi[324791]: }
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211642391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.246 253542 DEBUG nova.network.neutron [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:51 np0005534516 systemd[1]: libpod-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope: Deactivated successfully.
Nov 25 03:35:51 np0005534516 conmon[324791]: conmon e5ffbbdb1a81db44c454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope/container/memory.events
Nov 25 03:35:51 np0005534516 podman[324774]: 2025-11-25 08:35:51.257437259 +0000 UTC m=+1.117282999 container died e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.259 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60-merged.mount: Deactivated successfully.
Nov 25 03:35:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 159 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 224 op/s
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.296 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:51 np0005534516 podman[324774]: 2025-11-25 08:35:51.318708856 +0000 UTC m=+1.178554576 container remove e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.330 253542 INFO nova.compute.manager [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 1.71 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:51 np0005534516 systemd[1]: libpod-conmon-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope: Deactivated successfully.
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a02c63c4-ba47-4d3e-a31d-c3f27d71e1a8 does not exist
Nov 25 03:35:51 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cac9dff5-de03-498a-939e-1a25cdd45215 does not exist
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.454 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.454 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.563 253542 DEBUG oslo_concurrency.processutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.610 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.612 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.612 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.613 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.613 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.614 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received unexpected event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.614 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.615 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.615 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.616 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.616 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.617 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.617 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-deleted-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.618 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.618 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.619 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.619 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.620 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.620 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.639 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059736.6345212, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.639 253542 INFO nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.658 253542 DEBUG nova.compute.manager [None req-d5c8f677-cf87-4069-bdb7-0179c9186399 - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082249916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.747 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.749 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.750 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.752 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.754 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.775 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <uuid>4e9d3984-d789-45e1-83e3-8909597d3265</uuid>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <name>instance-00000045</name>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:name>tempest-InstanceActionsTestJSON-server-1670811507</nova:name>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:35:50</nova:creationTime>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:user uuid="4de06a7985be4463b069db269e2882d4">tempest-InstanceActionsTestJSON-270987687-project-member</nova:user>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:project uuid="a20ef4bed55a408c8933a4956b2dd3e4">tempest-InstanceActionsTestJSON-270987687</nova:project>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <nova:port uuid="d553507f-4019-4ce0-b549-4d221b9089cd">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="serial">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="uuid">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk.config">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d8:47:0f"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <target dev="tapd553507f-40"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log" append="off"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:35:51 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:35:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:35:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:35:51 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.778 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.778 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.780 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.780 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.781 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.782 253542 DEBUG os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.790 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd553507f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.791 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd553507f-40, col_values=(('external_ids', {'iface-id': 'd553507f-4019-4ce0-b549-4d221b9089cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:47:0f', 'vm-uuid': '4e9d3984-d789-45e1-83e3-8909597d3265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 NetworkManager[48915]: <info>  [1764059751.7952] manager: (tapd553507f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.806 253542 INFO os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')#033[00m
Nov 25 03:35:51 np0005534516 kernel: tapd553507f-40: entered promiscuous mode
Nov 25 03:35:51 np0005534516 NetworkManager[48915]: <info>  [1764059751.8956] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Nov 25 03:35:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:51Z|00670|binding|INFO|Claiming lport d553507f-4019-4ce0-b549-4d221b9089cd for this chassis.
Nov 25 03:35:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:51Z|00671|binding|INFO|d553507f-4019-4ce0-b549-4d221b9089cd: Claiming fa:16:3e:d8:47:0f 10.100.0.9
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.926 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.927 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e bound to our chassis#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.929 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66249d1f-478b-4b2b-a784-933c0556752e#033[00m
Nov 25 03:35:51 np0005534516 systemd-udevd[325055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:51Z|00672|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd ovn-installed in OVS
Nov 25 03:35:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:51Z|00673|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd up in Southbound
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 systemd-machined[215790]: New machine qemu-82-instance-00000045.
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.942 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab375394-1abf-4515-9f6d-893a629016df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.944 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66249d1f-41 in ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:35:51 np0005534516 nova_compute[253538]: 2025-11-25 08:35:51.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.946 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66249d1f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[46a3fabe-6ad2-467b-99c7-16e5701a06b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0beab3af-2b77-444d-90ac-7fdb95861af7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:51 np0005534516 systemd[1]: Started Virtual Machine qemu-82-instance-00000045.
Nov 25 03:35:51 np0005534516 NetworkManager[48915]: <info>  [1764059751.9589] device (tapd553507f-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:35:51 np0005534516 NetworkManager[48915]: <info>  [1764059751.9600] device (tapd553507f-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.958 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[13835c2a-dad2-4f7c-a3d8-580507d6781f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32c3782e-b201-4f41-a113-976d4033cec4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444701479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.019 253542 DEBUG oslo_concurrency.processutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.023 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e308f19c-1d38-483c-b8a4-f3378cb965b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.025 253542 DEBUG nova.compute.provider_tree [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65638bb4-5de3-4d99-bcca-eab1026483f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 NetworkManager[48915]: <info>  [1764059752.0294] manager: (tap66249d1f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Nov 25 03:35:52 np0005534516 systemd-udevd[325060]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.038 253542 DEBUG nova.scheduler.client.report [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.064 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[12823ccd-2bd7-4fe6-a509-7d52cbd909b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.067 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b20ca17e-8305-40e8-8661-4e0a47bb8338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 NetworkManager[48915]: <info>  [1764059752.0897] device (tap66249d1f-40): carrier: link connected
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.094 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.094 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d2cd01-5e7a-494d-a718-9b9088876eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.111 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[596e70f0-b5b4-4de6-a593-2f5d46df790b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510668, 'reachable_time': 32783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325091, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8e9a4a-d00e-43c2-b9a9-e084aa1804d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:29ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510668, 'tstamp': 510668}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325092, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66678557-d654-41ff-9251-626671d96448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510668, 'reachable_time': 32783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325093, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.160 253542 INFO nova.scheduler.client.report [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Deleted allocations for instance 9396c5ff-9457-400c-8916-ecd03eded0c1#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.186 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d39e1a5-79cf-45cc-acb7-0005abbb7515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.249 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[978cbf4f-8594-4180-b73f-14cbfdcddb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.250 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.251 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66249d1f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:52 np0005534516 NetworkManager[48915]: <info>  [1764059752.2545] manager: (tap66249d1f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 25 03:35:52 np0005534516 kernel: tap66249d1f-40: entered promiscuous mode
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.256 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66249d1f-40, col_values=(('external_ids', {'iface-id': '57f8eb8e-0895-4599-b15e-b1a08378dfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:52Z|00674|binding|INFO|Releasing lport 57f8eb8e-0895-4599-b15e-b1a08378dfc1 from this chassis (sb_readonly=0)
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.278 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eff374fb-4ac1-4d41-845e-b4047473d0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.282 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:35:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.283 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'env', 'PROCESS_TAG=haproxy-66249d1f-478b-4b2b-a784-933c0556752e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66249d1f-478b-4b2b-a784-933c0556752e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4e9d3984-d789-45e1-83e3-8909597d3265 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059752.3455155, 4e9d3984-d789-45e1-83e3-8909597d3265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.350 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.353 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance rebooted successfully.#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.354 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.543 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.563 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.564 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059752.3478527, 4e9d3984-d789-45e1-83e3-8909597d3265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.564 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Started (Lifecycle Event)#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.584 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:35:52 np0005534516 nova_compute[253538]: 2025-11-25 08:35:52.606 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:52 np0005534516 podman[325167]: 2025-11-25 08:35:52.662646746 +0000 UTC m=+0.044963250 container create 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:35:52 np0005534516 systemd[1]: Started libpod-conmon-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope.
Nov 25 03:35:52 np0005534516 podman[325167]: 2025-11-25 08:35:52.638973289 +0000 UTC m=+0.021289813 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:35:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:35:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676346057d689cb023f9cac0e4de20ceef90b417e6dbb2d3799ce4df248a42ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:35:52 np0005534516 podman[325167]: 2025-11-25 08:35:52.758662896 +0000 UTC m=+0.140979450 container init 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:35:52 np0005534516 podman[325167]: 2025-11-25 08:35:52.765785938 +0000 UTC m=+0.148102452 container start 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 03:35:52 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : New worker (325188) forked
Nov 25 03:35:52 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : Loading success.
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:35:53
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'vms', 'backups']
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 32 KiB/s wr, 233 op/s
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.663 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.664 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 WARNING nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state None.#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.667 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:35:53 np0005534516 nova_compute[253538]: 2025-11-25 08:35:53.667 253542 WARNING nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state None.#033[00m
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:35:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:35:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 134 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 32 KiB/s wr, 266 op/s
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.613 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.615 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.615 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.616 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.616 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.618 253542 INFO nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Terminating instance#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.621 253542 DEBUG nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:35:55 np0005534516 kernel: tapd553507f-40 (unregistering): left promiscuous mode
Nov 25 03:35:55 np0005534516 NetworkManager[48915]: <info>  [1764059755.6628] device (tapd553507f-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:55Z|00675|binding|INFO|Releasing lport d553507f-4019-4ce0-b549-4d221b9089cd from this chassis (sb_readonly=0)
Nov 25 03:35:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:55Z|00676|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd down in Southbound
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:35:55Z|00677|binding|INFO|Removing iface tapd553507f-40 ovn-installed in OVS
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.693 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.695 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.696 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66249d1f-478b-4b2b-a784-933c0556752e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1403c730-08ee-4fdc-8aa8-b85f5169e4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.697 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace which is not needed anymore#033[00m
Nov 25 03:35:55 np0005534516 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 03:35:55 np0005534516 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Consumed 3.808s CPU time.
Nov 25 03:35:55 np0005534516 systemd-machined[215790]: Machine qemu-82-instance-00000045 terminated.
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : haproxy version is 2.8.14-c23fe91
Nov 25 03:35:55 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : path to executable is /usr/sbin/haproxy
Nov 25 03:35:55 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [WARNING]  (325186) : Exiting Master process...
Nov 25 03:35:55 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [ALERT]    (325186) : Current worker (325188) exited with code 143 (Terminated)
Nov 25 03:35:55 np0005534516 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [WARNING]  (325186) : All workers exited. Exiting... (0)
Nov 25 03:35:55 np0005534516 systemd[1]: libpod-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope: Deactivated successfully.
Nov 25 03:35:55 np0005534516 podman[325222]: 2025-11-25 08:35:55.842713013 +0000 UTC m=+0.048097844 container died 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.854 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance destroyed successfully.#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.855 253542 DEBUG nova.objects.instance [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'resources' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.867 253542 DEBUG nova.virt.libvirt.vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:52Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.868 253542 DEBUG nova.network.os_vif_util [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:35:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c-userdata-shm.mount: Deactivated successfully.
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.869 253542 DEBUG nova.network.os_vif_util [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.869 253542 DEBUG os_vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:35:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-676346057d689cb023f9cac0e4de20ceef90b417e6dbb2d3799ce4df248a42ed-merged.mount: Deactivated successfully.
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.871 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd553507f-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.875 253542 INFO os_vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')#033[00m
Nov 25 03:35:55 np0005534516 podman[325222]: 2025-11-25 08:35:55.879031829 +0000 UTC m=+0.084416650 container cleanup 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 03:35:55 np0005534516 systemd[1]: libpod-conmon-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope: Deactivated successfully.
Nov 25 03:35:55 np0005534516 podman[325270]: 2025-11-25 08:35:55.939146114 +0000 UTC m=+0.037356175 container remove 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.944 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e25f94b8-b3e1-40a3-97fd-3d423ffe49ed]: (4, ('Tue Nov 25 08:35:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c)\n8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c\nTue Nov 25 08:35:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c)\n8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39ccc98c-4e1f-49c9-91ee-3b36598aef66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.947 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 kernel: tap66249d1f-40: left promiscuous mode
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:35:55 np0005534516 nova_compute[253538]: 2025-11-25 08:35:55.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.968 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97b2738b-0a20-4b0f-a133-66adbb5b98a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.979 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b5657-6d95-4d41-a5b8-6fb97c04ae85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d5d257-bfa4-451b-a006-c48294be27d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.995 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddfb548-f4aa-433d-81ba-94d0021be331]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510661, 'reachable_time': 19678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325297, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.998 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:35:55 np0005534516 systemd[1]: run-netns-ovnmeta\x2d66249d1f\x2d478b\x2d4b2b\x2da784\x2d933c0556752e.mount: Deactivated successfully.
Nov 25 03:35:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.998 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6a37bc94-fd85-401c-97d1-814bcc4c7849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.178 253542 INFO nova.virt.libvirt.driver [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deleting instance files /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265_del#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.179 253542 INFO nova.virt.libvirt.driver [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deletion of /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265_del complete#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.261 253542 INFO nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.262 253542 DEBUG oslo.service.loopingcall [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.262 253542 DEBUG nova.compute.manager [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:35:56 np0005534516 nova_compute[253538]: 2025-11-25 08:35:56.263 253542 DEBUG nova.network.neutron [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:35:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 117 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 29 KiB/s wr, 279 op/s
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.674 253542 DEBUG nova.network.neutron [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.727 253542 DEBUG nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-deleted-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.727 253542 INFO nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Neutron deleted interface d553507f-4019-4ce0-b549-4d221b9089cd; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.728 253542 DEBUG nova.network.neutron [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.778 253542 INFO nova.compute.manager [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 1.51 seconds to deallocate network for instance.#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.784 253542 DEBUG nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Detach interface failed, port_id=d553507f-4019-4ce0-b549-4d221b9089cd, reason: Instance 4e9d3984-d789-45e1-83e3-8909597d3265 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.824 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.825 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:35:57 np0005534516 nova_compute[253538]: 2025-11-25 08:35:57.875 253542 DEBUG oslo_concurrency.processutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:35:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:35:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259423462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.352 253542 DEBUG oslo_concurrency.processutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.360 253542 DEBUG nova.compute.provider_tree [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.375 253542 DEBUG nova.scheduler.client.report [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.405 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.438 253542 INFO nova.scheduler.client.report [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Deleted allocations for instance 4e9d3984-d789-45e1-83e3-8909597d3265
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.549 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:35:58 np0005534516 nova_compute[253538]: 2025-11-25 08:35:58.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:35:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 105 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 27 KiB/s wr, 241 op/s
Nov 25 03:36:00 np0005534516 nova_compute[253538]: 2025-11-25 08:36:00.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:36:00 np0005534516 nova_compute[253538]: 2025-11-25 08:36:00.869 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:36:00 np0005534516 nova_compute[253538]: 2025-11-25 08:36:00.869 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:36:00 np0005534516 nova_compute[253538]: 2025-11-25 08:36:00.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:36:00 np0005534516 nova_compute[253538]: 2025-11-25 08:36:00.898 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:36:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.040 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.040 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.046 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.047 253542 INFO nova.compute.claims [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.151 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:36:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 15 KiB/s wr, 225 op/s
Nov 25 03:36:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:36:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762858983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.604 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.610 253542 DEBUG nova.compute.provider_tree [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.625 253542 DEBUG nova.scheduler.client.report [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.656 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.657 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.704 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.704 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.732 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.759 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:36:01 np0005534516 nova_compute[253538]: 2025-11-25 08:36:01.862 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:42:09 np0005534516 nova_compute[253538]: 2025-11-25 08:42:09.942 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Creating config drive at /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config
Nov 25 03:42:09 np0005534516 nova_compute[253538]: 2025-11-25 08:42:09.952 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2u06uy8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.116 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2u06uy8" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.158 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.162 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:42:10 np0005534516 rsyslogd[1007]: imjournal: 17176 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.283 253542 DEBUG nova.network.neutron [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updated VIF entry in instance network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.286 253542 DEBUG nova.network.neutron [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.312 253542 DEBUG oslo_concurrency.lockutils [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.357 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.358 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deleting local config drive /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config because it was imported into RBD.
Nov 25 03:42:10 np0005534516 kernel: tap3ed26b91-50: entered promiscuous mode
Nov 25 03:42:10 np0005534516 NetworkManager[48915]: <info>  [1764060130.4325] manager: (tap3ed26b91-50): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Nov 25 03:42:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:10Z|00945|binding|INFO|Claiming lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for this chassis.
Nov 25 03:42:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:10Z|00946|binding|INFO|3ed26b91-50ed-4d4d-ad1a-a63df94e9607: Claiming fa:16:3e:44:bf:83 10.100.0.13
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.446 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:bf:83 10.100.0.13'], port_security=['fa:16:3e:44:bf:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '85d7ff38-1884-4942-82fe-fb79122afe63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3ed26b91-50ed-4d4d-ad1a-a63df94e9607) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.449 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.453 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 03:42:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:10Z|00947|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 ovn-installed in OVS
Nov 25 03:42:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:10Z|00948|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 up in Southbound
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:42:10 np0005534516 systemd-machined[215790]: New machine qemu-120-instance-00000062.
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.482 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8026efe-f6d7-4c47-8312-a24b81623c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:42:10 np0005534516 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 25 03:42:10 np0005534516 systemd-udevd[347852]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.537 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a3154567-0fe8-42d2-a5da-51a0187cd7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:42:10 np0005534516 NetworkManager[48915]: <info>  [1764060130.5418] device (tap3ed26b91-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:42:10 np0005534516 NetworkManager[48915]: <info>  [1764060130.5432] device (tap3ed26b91-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbacd1f-d9c3-483c-a1ba-00c231568d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.594 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef09eea-1c9b-49a7-99ae-e76f7b0d3dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.606528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130606574, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1612, "num_deletes": 257, "total_data_size": 2345290, "memory_usage": 2391632, "flush_reason": "Manual Compaction"}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130623629, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2296546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36051, "largest_seqno": 37662, "table_properties": {"data_size": 2289001, "index_size": 4488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16111, "raw_average_key_size": 20, "raw_value_size": 2273758, "raw_average_value_size": 2900, "num_data_blocks": 199, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059997, "oldest_key_time": 1764059997, "file_creation_time": 1764060130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17159 microseconds, and 9427 cpu microseconds.
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7ff6d0-ee9c-45a7-b2a3-1af01768274c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347863, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.623687) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2296546 bytes OK
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.623711) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625405) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625428) EVENT_LOG_v1 {"time_micros": 1764060130625421, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2338177, prev total WAL file size 2338177, number of live WAL files 2.
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.626475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2242KB)], [80(7357KB)]
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130626515, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9830778, "oldest_snapshot_seqno": -1}
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.646 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52ac0c75-4608-4981-a5cc-dcad0f169ee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347864, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347864, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.648 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.653 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.654 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.655 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.656 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6258 keys, 8230205 bytes, temperature: kUnknown
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130692378, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8230205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8189787, "index_size": 23668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158609, "raw_average_key_size": 25, "raw_value_size": 8078995, "raw_average_value_size": 1290, "num_data_blocks": 958, "num_entries": 6258, "num_filter_entries": 6258, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.692587) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8230205 bytes
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.693894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 6783, records dropped: 525 output_compression: NoCompression
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.693926) EVENT_LOG_v1 {"time_micros": 1764060130693911, "job": 46, "event": "compaction_finished", "compaction_time_micros": 65929, "compaction_time_cpu_micros": 36970, "output_level": 6, "num_output_files": 1, "total_output_size": 8230205, "num_input_records": 6783, "num_output_records": 6258, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130694788, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130697479, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.626401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.890 253542 DEBUG nova.compute.manager [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.891 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.892 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.894 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:10 np0005534516 nova_compute[253538]: 2025-11-25 08:42:10.896 253542 DEBUG nova.compute.manager [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Processing event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0630364, 85d7ff38-1884-4942-82fe-fb79122afe63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.064 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Started (Lifecycle Event)#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.068 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.072 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.076 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance spawned successfully.#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.077 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.139 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.148 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.149 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.150 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.151 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.152 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.152 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.159 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.202 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0632844, 85d7ff38-1884-4942-82fe-fb79122afe63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.202 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.234 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.239 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0719643, 85d7ff38-1884-4942-82fe-fb79122afe63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.240 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.250 253542 INFO nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.251 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.262 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.267 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.306 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.344 253542 INFO nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Took 9.23 seconds to build instance.#033[00m
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.364 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Nov 25 03:42:11 np0005534516 nova_compute[253538]: 2025-11-25 08:42:11.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.085 253542 DEBUG nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.086 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.086 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.087 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.087 253542 DEBUG nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.088 253542 WARNING nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:42:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.724 253542 INFO nova.compute.manager [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Pausing#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.725 253542 DEBUG nova.objects.instance [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'flavor' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.751 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060133.7508879, 85d7ff38-1884-4942-82fe-fb79122afe63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.751 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.753 253542 DEBUG nova.compute.manager [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.781 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:42:13 np0005534516 nova_compute[253538]: 2025-11-25 08:42:13.817 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:42:14 np0005534516 nova_compute[253538]: 2025-11-25 08:42:14.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 195 op/s
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.576 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.576 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.577 253542 INFO nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Shelving#033[00m
Nov 25 03:42:16 np0005534516 kernel: tap3ed26b91-50 (unregistering): left promiscuous mode
Nov 25 03:42:16 np0005534516 NetworkManager[48915]: <info>  [1764060136.6494] device (tap3ed26b91-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:42:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:16Z|00949|binding|INFO|Releasing lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 from this chassis (sb_readonly=0)
Nov 25 03:42:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:16Z|00950|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 down in Southbound
Nov 25 03:42:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:16Z|00951|binding|INFO|Removing iface tap3ed26b91-50 ovn-installed in OVS
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.674 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:bf:83 10.100.0.13'], port_security=['fa:16:3e:44:bf:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '85d7ff38-1884-4942-82fe-fb79122afe63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3ed26b91-50ed-4d4d-ad1a-a63df94e9607) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.677 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.679 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.710 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e5d610-e00f-4833-b14a-36a25547e1a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 03:42:16 np0005534516 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 3.334s CPU time.
Nov 25 03:42:16 np0005534516 systemd-machined[215790]: Machine qemu-120-instance-00000062 terminated.
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.756 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aa847270-b30d-4335-bec0-88045b92d596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.761 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e28856-c893-4b89-b3bc-6d4a62b26cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47ed5818-56b5-497b-a7fc-e854972c5c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.823 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b696d728-3122-4c0a-93d3-3e1b8b5affff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347921, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.839 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.839 253542 DEBUG nova.objects.instance [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.844 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59e1cb55-3b1d-4be2-9d57-fa772b76c110]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347929, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347929, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.845 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.854 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.979 253542 DEBUG nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.979 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:42:16 np0005534516 nova_compute[253538]: 2025-11-25 08:42:16.981 253542 WARNING nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state paused and task_state shelving.#033[00m
Nov 25 03:42:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 03:42:17 np0005534516 nova_compute[253538]: 2025-11-25 08:42:17.865 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Beginning cold snapshot process#033[00m
Nov 25 03:42:18 np0005534516 nova_compute[253538]: 2025-11-25 08:42:18.034 253542 DEBUG nova.virt.libvirt.imagebackend [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:42:18 np0005534516 nova_compute[253538]: 2025-11-25 08:42:18.584 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(f5d1a2552dbe40718b77fec433a1f35c) on rbd image(85d7ff38-1884-4942-82fe-fb79122afe63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:18 np0005534516 podman[347984]: 2025-11-25 08:42:18.805060067 +0000 UTC m=+0.054594450 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.440 253542 DEBUG nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.441 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.442 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.442 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.443 253542 DEBUG nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:42:19 np0005534516 nova_compute[253538]: 2025-11-25 08:42:19.443 253542 WARNING nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 25 03:42:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 681 KiB/s wr, 131 op/s
Nov 25 03:42:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 25 03:42:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 25 03:42:19 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.220 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/85d7ff38-1884-4942-82fe-fb79122afe63_disk@f5d1a2552dbe40718b77fec433a1f35c to images/394afc84-ee61-4a8e-9e6c-3501a71601dc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.602 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/394afc84-ee61-4a8e-9e6c-3501a71601dc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:42:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 25 03:42:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 25 03:42:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 25 03:42:20 np0005534516 nova_compute[253538]: 2025-11-25 08:42:20.934 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(f5d1a2552dbe40718b77fec433a1f35c) on rbd image(85d7ff38-1884-4942-82fe-fb79122afe63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:42:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Nov 25 03:42:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 25 03:42:21 np0005534516 nova_compute[253538]: 2025-11-25 08:42:21.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 25 03:42:21 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 25 03:42:21 np0005534516 nova_compute[253538]: 2025-11-25 08:42:21.819 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(394afc84-ee61-4a8e-9e6c-3501a71601dc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 25 03:42:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 25 03:42:22 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 327 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.506 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.507 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.527 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.624 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.625 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.635 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.635 253542 INFO nova.compute.claims [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.710 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.736 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.736 253542 DEBUG nova.compute.provider_tree [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.757 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.781 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.799 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.799 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:42:23 np0005534516 podman[348096]: 2025-11-25 08:42:23.85883599 +0000 UTC m=+0.107399928 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:42:23 np0005534516 nova_compute[253538]: 2025-11-25 08:42:23.883 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:42:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515899805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.390 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.401 253542 DEBUG nova.compute.provider_tree [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.416 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Snapshot image upload complete
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.417 253542 DEBUG nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.419 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.473 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.474 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.523 253542 INFO nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Shelve offloading
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.534 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.535 253542 DEBUG nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.538 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.538 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.539 253542 DEBUG nova.network.neutron [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.548 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.548 253542 DEBUG nova.network.neutron [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.567 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.591 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.694 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.696 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.697 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating image(s)
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.733 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.772 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.811 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.818 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "de4dfd283254203262189b67ccae3be59d25acb8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.819 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "de4dfd283254203262189b67ccae3be59d25acb8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.828 253542 DEBUG nova.network.neutron [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 03:42:24 np0005534516 nova_compute[253538]: 2025-11-25 08:42:24.829 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.138 253542 DEBUG nova.virt.libvirt.imagebackend [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.221 253542 DEBUG nova.virt.libvirt.imagebackend [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.222 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] cloning images/46122271-b2f8-4ae8-a1b5-9573f094bebe@snap to None/0774dd07-d931-40b5-8590-915c0611277d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.375 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "de4dfd283254203262189b67ccae3be59d25acb8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:42:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 135 op/s
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.525 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] resizing rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.646 253542 DEBUG nova.objects.instance [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.659 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.660 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Ensure instance console log exists: /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.660 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.661 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.661 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.662 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c6a42cd5b932233a822877975aa7662d',container_format='bare',created_at=2025-11-25T08:42:19Z,direct_url=<?>,disk_format='raw',id=46122271-b2f8-4ae8-a1b5-9573f094bebe,min_disk=0,min_ram=0,name='tempest-image-dependency-test-593485743',owner='3c6a99942fff45b7809546d76f7d9c36',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T08:42:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '46122271-b2f8-4ae8-a1b5-9573f094bebe'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.666 253542 WARNING nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.671 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.672 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.675 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c6a42cd5b932233a822877975aa7662d',container_format='bare',created_at=2025-11-25T08:42:19Z,direct_url=<?>,disk_format='raw',id=46122271-b2f8-4ae8-a1b5-9573f094bebe,min_disk=0,min_ram=0,name='tempest-image-dependency-test-593485743',owner='3c6a99942fff45b7809546d76f7d9c36',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T08:42:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.677 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.678 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.683 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.949 253542 DEBUG nova.network.neutron [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:25 np0005534516 nova_compute[253538]: 2025-11-25 08:42:25.965 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:42:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024165051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.137 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.164 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.169 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.538 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.559 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:42:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224901811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.589 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.591 253542 DEBUG nova.objects.instance [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.605 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <uuid>0774dd07-d931-40b5-8590-915c0611277d</uuid>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <name>instance-00000063</name>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:name>instance-depend-image</nova:name>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:42:25</nova:creationTime>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:user uuid="55be30cefe8b4a10b26c37d845e9e2fa">tempest-ImageDependencyTests-1858023939-project-member</nova:user>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <nova:project uuid="3c6a99942fff45b7809546d76f7d9c36">tempest-ImageDependencyTests-1858023939</nova:project>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="46122271-b2f8-4ae8-a1b5-9573f094bebe"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="serial">0774dd07-d931-40b5-8590-915c0611277d</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="uuid">0774dd07-d931-40b5-8590-915c0611277d</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0774dd07-d931-40b5-8590-915c0611277d_disk">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0774dd07-d931-40b5-8590-915c0611277d_disk.config">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/console.log" append="off"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:42:26 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:42:26 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:42:26 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:42:26 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.654 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.655 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.655 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Using config drive#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.678 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:26 np0005534516 podman[348397]: 2025-11-25 08:42:26.78822081 +0000 UTC m=+0.134184278 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.887 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating config drive at /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config#033[00m
Nov 25 03:42:26 np0005534516 nova_compute[253538]: 2025-11-25 08:42:26.891 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2x_suxn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.053 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2x_suxn" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.075 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.079 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config 0774dd07-d931-40b5-8590-915c0611277d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:27.136 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:27.138 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.175 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.176 253542 DEBUG nova.objects.instance [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.188 253542 DEBUG nova.virt.libvirt.vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:42:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-732970817',display_name='tempest-ServerActionsTestOtherB-server-732970817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-732970817',id=98,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:42:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-0hxuw32b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:24.417271',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='394afc84-ee61-4a8e-9e6c-3501a71601dc'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:42:17Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=85d7ff38-1884-4942-82fe-fb79122afe63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.188 253542 DEBUG nova.network.os_vif_util [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.189 253542 DEBUG nova.network.os_vif_util [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.189 253542 DEBUG os_vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed26b91-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.197 253542 INFO os_vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50')#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.251 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config 0774dd07-d931-40b5-8590-915c0611277d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.252 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deleting local config drive /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config because it was imported into RBD.#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.305 253542 DEBUG nova.compute.manager [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG nova.compute.manager [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing instance network info cache due to event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:42:27 np0005534516 systemd-machined[215790]: New machine qemu-121-instance-00000063.
Nov 25 03:42:27 np0005534516 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 25 03:42:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 173 op/s
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.617 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deleting instance files /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63_del#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.617 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deletion of /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63_del complete#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.706 253542 INFO nova.scheduler.client.report [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 85d7ff38-1884-4942-82fe-fb79122afe63#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.756 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.756 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:27 np0005534516 nova_compute[253538]: 2025-11-25 08:42:27.845 253542 DEBUG oslo_concurrency.processutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059818539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.317 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060148.3164241, 0774dd07-d931-40b5-8590-915c0611277d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.322 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.323 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.325 253542 DEBUG oslo_concurrency.processutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.330 253542 INFO nova.virt.libvirt.driver [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance spawned successfully.#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.330 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.335 253542 DEBUG nova.compute.provider_tree [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.343 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.358 253542 DEBUG nova.scheduler.client.report [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060148.3167772, 0774dd07-d931-40b5-8590-915c0611277d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Started (Lifecycle Event)#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.395 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.397 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.406 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.444 253542 INFO nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 3.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.445 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.462 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.517 253542 INFO nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 4.94 seconds to build instance.#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.545 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.837 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updated VIF entry in instance network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.838 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3ed26b91-50", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:28 np0005534516 nova_compute[253538]: 2025-11-25 08:42:28.859 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:42:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:42:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 324 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 163 op/s
Nov 25 03:42:30 np0005534516 nova_compute[253538]: 2025-11-25 08:42:30.029 253542 DEBUG nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:30 np0005534516 nova_compute[253538]: 2025-11-25 08:42:30.078 253542 INFO nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] instance snapshotting#033[00m
Nov 25 03:42:30 np0005534516 nova_compute[253538]: 2025-11-25 08:42:30.370 253542 INFO nova.virt.libvirt.driver [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Beginning live snapshot process#033[00m
Nov 25 03:42:30 np0005534516 nova_compute[253538]: 2025-11-25 08:42:30.599 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] creating snapshot(26df58f0b1d24626a08974dbe2780fb6) on rbd image(0774dd07-d931-40b5-8590-915c0611277d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 25 03:42:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 25 03:42:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 25 03:42:30 np0005534516 nova_compute[253538]: 2025-11-25 08:42:30.986 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] cloning vms/0774dd07-d931-40b5-8590-915c0611277d_disk@26df58f0b1d24626a08974dbe2780fb6 to images/90509b26-5374-45ef-ab5c-c852fb5dfe98 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.119 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] flattening images/90509b26-5374-45ef-ab5c-c852fb5dfe98 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.317 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] removing snapshot(26df58f0b1d24626a08974dbe2780fb6) on rbd image(0774dd07-d931-40b5-8590-915c0611277d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:42:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 309 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.838 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060136.8360877, 85d7ff38-1884-4942-82fe-fb79122afe63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.838 253542 INFO nova.compute.manager [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.857 253542 DEBUG nova.compute.manager [None req-3dd7a9ee-2223-425b-b271-52bc41ed0731 - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 25 03:42:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 25 03:42:31 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 25 03:42:31 np0005534516 nova_compute[253538]: 2025-11-25 08:42:31.967 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] creating snapshot(snap) on rbd image(90509b26-5374-45ef-ab5c-c852fb5dfe98) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700371143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.090 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.178 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.178 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.189 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.190 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.477 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.478 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.88978576660156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.479 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.479 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.540 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3e75d0af-c514-42c5-aa05-88ae5552f196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5f960b00-a365-4665-8a74-50d2e7b7f940 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0774dd07-d931-40b5-8590-915c0611277d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.542 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.609 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.945 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.945 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.946 253542 INFO nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shelving#033[00m
Nov 25 03:42:32 np0005534516 nova_compute[253538]: 2025-11-25 08:42:32.963 253542 DEBUG nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:42:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 25 03:42:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 25 03:42:33 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 25 03:42:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/499480278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:33 np0005534516 nova_compute[253538]: 2025-11-25 08:42:33.135 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:33 np0005534516 nova_compute[253538]: 2025-11-25 08:42:33.143 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:42:33 np0005534516 nova_compute[253538]: 2025-11-25 08:42:33.161 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:42:33 np0005534516 nova_compute[253538]: 2025-11-25 08:42:33.209 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:42:33 np0005534516 nova_compute[253538]: 2025-11-25 08:42:33.209 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 48 KiB/s wr, 273 op/s
Nov 25 03:42:35 np0005534516 kernel: tapf7c4b9b0-34 (unregistering): left promiscuous mode
Nov 25 03:42:35 np0005534516 NetworkManager[48915]: <info>  [1764060155.3220] device (tapf7c4b9b0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.325 253542 INFO nova.virt.libvirt.driver [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Snapshot image upload complete#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.326 253542 INFO nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 5.25 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 03:42:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:35Z|00952|binding|INFO|Releasing lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 from this chassis (sb_readonly=0)
Nov 25 03:42:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:35Z|00953|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 down in Southbound
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:42:35Z|00954|binding|INFO|Removing iface tapf7c4b9b0-34 ovn-installed in OVS
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.350 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.352 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 03:42:35 np0005534516 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Consumed 19.170s CPU time.
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4e3056-7246-4220-a28a-e765e1e8834d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 systemd-machined[215790]: Machine qemu-116-instance-0000005f terminated.
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.426 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[60022fc6-a9aa-41e4-a72f-3ca931533f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.429 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42c2ab19-dceb-4968-b283-91584b402343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.455 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0110be4b-0651-47f9-9927-a364794d47f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 63 KiB/s wr, 200 op/s
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.477 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b72e8d-e396-4e17-8561-6bbf9b49ab2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348776, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.501 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6061d5-4acb-4124-93f8-f79c4e2f1518]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348777, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348777, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.504 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.513 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.513 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.514 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.515 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.677 253542 DEBUG nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.679 253542 DEBUG nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.679 253542 WARNING nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state shelving.#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.986 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.993 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.#033[00m
Nov 25 03:42:35 np0005534516 nova_compute[253538]: 2025-11-25 08:42:35.994 253542 DEBUG nova.objects.instance [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:36 np0005534516 nova_compute[253538]: 2025-11-25 08:42:36.487 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Beginning cold snapshot process#033[00m
Nov 25 03:42:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 25 03:42:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 25 03:42:36 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 25 03:42:36 np0005534516 nova_compute[253538]: 2025-11-25 08:42:36.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:36 np0005534516 nova_compute[253538]: 2025-11-25 08:42:36.890 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.087825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157087872, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 258, "total_data_size": 526448, "memory_usage": 538808, "flush_reason": "Manual Compaction"}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157113812, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 520248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37663, "largest_seqno": 38240, "table_properties": {"data_size": 517043, "index_size": 1113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7372, "raw_average_key_size": 18, "raw_value_size": 510585, "raw_average_value_size": 1309, "num_data_blocks": 49, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060131, "oldest_key_time": 1764060131, "file_creation_time": 1764060157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 26119 microseconds, and 2828 cpu microseconds.
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:42:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:37.141 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.113927) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 520248 bytes OK
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.113973) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150742) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150794) EVENT_LOG_v1 {"time_micros": 1764060157150782, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 523188, prev total WAL file size 523188, number of live WAL files 2.
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.151615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353131' seq:0, type:0; will stop at (end)
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(508KB)], [83(8037KB)]
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157151660, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8750453, "oldest_snapshot_seqno": -1}
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.204 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6121 keys, 8625972 bytes, temperature: kUnknown
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157363819, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8625972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8585317, "index_size": 24225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156809, "raw_average_key_size": 25, "raw_value_size": 8475843, "raw_average_value_size": 1384, "num_data_blocks": 977, "num_entries": 6121, "num_filter_entries": 6121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.364268) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8625972 bytes
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.383845) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.2 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.8 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(33.4) write-amplify(16.6) OK, records in: 6648, records dropped: 527 output_compression: NoCompression
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.384019) EVENT_LOG_v1 {"time_micros": 1764060157383881, "job": 48, "event": "compaction_finished", "compaction_time_micros": 212346, "compaction_time_cpu_micros": 33954, "output_level": 6, "num_output_files": 1, "total_output_size": 8625972, "num_input_records": 6648, "num_output_records": 6121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157384817, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157388251, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.151478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.390 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(16445a25cfbf4f2aa316f34132ebb55f) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 24 KiB/s wr, 86 op/s
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.897 253542 DEBUG nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:42:37 np0005534516 nova_compute[253538]: 2025-11-25 08:42:37.899 253542 WARNING nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 03:42:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 25 03:42:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 25 03:42:38 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 25 03:42:38 np0005534516 nova_compute[253538]: 2025-11-25 08:42:38.618 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk@16445a25cfbf4f2aa316f34132ebb55f to images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.096 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.204 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.204 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.205 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.205 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.206 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.208 253542 INFO nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Terminating instance#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.210 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.211 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquired lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.211 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:42:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 293 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 23 KiB/s wr, 122 op/s
Nov 25 03:42:39 np0005534516 nova_compute[253538]: 2025-11-25 08:42:39.616 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.247 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(16445a25cfbf4f2aa316f34132ebb55f) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.294 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.304 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Releasing lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.305 253542 DEBUG nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:42:40 np0005534516 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 25 03:42:40 np0005534516 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 1.498s CPU time.
Nov 25 03:42:40 np0005534516 systemd-machined[215790]: Machine qemu-121-instance-00000063 terminated.
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.535 253542 INFO nova.virt.libvirt.driver [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance destroyed successfully.#033[00m
Nov 25 03:42:40 np0005534516 nova_compute[253538]: 2025-11-25 08:42:40.537 253542 DEBUG nova.objects.instance [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'resources' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.067 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:41 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b9be7ac-c19f-46ae-82fb-02a0feaa8617 does not exist
Nov 25 03:42:41 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fc3a0c5f-c2bf-4fff-bdc3-033c8763e273 does not exist
Nov 25 03:42:41 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c8492b2-956b-473c-aaf7-f3146ae218c3 does not exist
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 25 03:42:41 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.269 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(34e9b311-13e0-4ffd-bc6f-64a46ba4b491) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:42:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 297 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 692 KiB/s wr, 128 op/s
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.614 253542 INFO nova.virt.libvirt.driver [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deleting instance files /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d_del#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.615 253542 INFO nova.virt.libvirt.driver [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deletion of /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d_del complete#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.677 253542 INFO nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.678 253542 DEBUG oslo.service.loopingcall [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.679 253542 DEBUG nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.679 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:42:41 np0005534516 nova_compute[253538]: 2025-11-25 08:42:41.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:41 np0005534516 podman[349225]: 2025-11-25 08:42:41.995296312 +0000 UTC m=+0.058347071 container create 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:42:42 np0005534516 systemd[1]: Started libpod-conmon-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope.
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:41.964057741 +0000 UTC m=+0.027108590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.092 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:42:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.105 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.117 253542 INFO nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 0.44 seconds to deallocate network for instance.#033[00m
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:42.128300956 +0000 UTC m=+0.191351805 container init 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:42.141035114 +0000 UTC m=+0.204085913 container start 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:42.147578672 +0000 UTC m=+0.210629531 container attach 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:42:42 np0005534516 festive_ardinghelli[349242]: 167 167
Nov 25 03:42:42 np0005534516 systemd[1]: libpod-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope: Deactivated successfully.
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:42.151155599 +0000 UTC m=+0.214206398 container died 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.161 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.162 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0ee65a6019a8c9a25dd2e6d46478134f051e7448330ec6a5060321ef2aa7fa52-merged.mount: Deactivated successfully.
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:42 np0005534516 podman[349225]: 2025-11-25 08:42:42.226765549 +0000 UTC m=+0.289816318 container remove 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:42:42 np0005534516 systemd[1]: libpod-conmon-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope: Deactivated successfully.
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.258 253542 DEBUG oslo_concurrency.processutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:42 np0005534516 podman[349265]: 2025-11-25 08:42:42.457136298 +0000 UTC m=+0.067280895 container create 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:42:42 np0005534516 systemd[1]: Started libpod-conmon-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope.
Nov 25 03:42:42 np0005534516 podman[349265]: 2025-11-25 08:42:42.428180798 +0000 UTC m=+0.038325445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:42 np0005534516 podman[349265]: 2025-11-25 08:42:42.577832037 +0000 UTC m=+0.187976634 container init 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:42:42 np0005534516 podman[349265]: 2025-11-25 08:42:42.589545396 +0000 UTC m=+0.199689963 container start 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:42:42 np0005534516 podman[349265]: 2025-11-25 08:42:42.59671167 +0000 UTC m=+0.206856237 container attach 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982093874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.754 253542 DEBUG oslo_concurrency.processutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.764 253542 DEBUG nova.compute.provider_tree [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.785 253542 DEBUG nova.scheduler.client.report [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.814 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.847 253542 INFO nova.scheduler.client.report [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Deleted allocations for instance 0774dd07-d931-40b5-8590-915c0611277d#033[00m
Nov 25 03:42:42 np0005534516 nova_compute[253538]: 2025-11-25 08:42:42.937 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 344 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.6 MiB/s wr, 209 op/s
Nov 25 03:42:43 np0005534516 recursing_ritchie[349300]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:42:43 np0005534516 recursing_ritchie[349300]: --> relative data size: 1.0
Nov 25 03:42:43 np0005534516 recursing_ritchie[349300]: --> All data devices are unavailable
Nov 25 03:42:43 np0005534516 systemd[1]: libpod-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Deactivated successfully.
Nov 25 03:42:43 np0005534516 podman[349265]: 2025-11-25 08:42:43.802162982 +0000 UTC m=+1.412307599 container died 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:42:43 np0005534516 systemd[1]: libpod-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Consumed 1.147s CPU time.
Nov 25 03:42:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326-merged.mount: Deactivated successfully.
Nov 25 03:42:43 np0005534516 podman[349265]: 2025-11-25 08:42:43.894644862 +0000 UTC m=+1.504789459 container remove 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:42:43 np0005534516 systemd[1]: libpod-conmon-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Deactivated successfully.
Nov 25 03:42:43 np0005534516 nova_compute[253538]: 2025-11-25 08:42:43.921 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Snapshot image upload complete#033[00m
Nov 25 03:42:43 np0005534516 nova_compute[253538]: 2025-11-25 08:42:43.922 253542 DEBUG nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.031 253542 INFO nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shelve offloading#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.045 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.046 253542 DEBUG nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.051 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.051 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:44 np0005534516 nova_compute[253538]: 2025-11-25 08:42:44.052 253542 DEBUG nova.network.neutron [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.712189142 +0000 UTC m=+0.055994097 container create 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:42:44 np0005534516 systemd[1]: Started libpod-conmon-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope.
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.684055785 +0000 UTC m=+0.027860790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.822023385 +0000 UTC m=+0.165828320 container init 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.829066976 +0000 UTC m=+0.172871901 container start 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.832644924 +0000 UTC m=+0.176449859 container attach 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:42:44 np0005534516 systemd[1]: libpod-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope: Deactivated successfully.
Nov 25 03:42:44 np0005534516 jolly_chatelet[349500]: 167 167
Nov 25 03:42:44 np0005534516 conmon[349500]: conmon 8d627db7af9257606716 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope/container/memory.events
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.837366862 +0000 UTC m=+0.181171857 container died 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:42:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1c61b86257a5fdf306f3af9ac048da22fbfa076aa0483122d6f46c9f8f08ba96-merged.mount: Deactivated successfully.
Nov 25 03:42:44 np0005534516 podman[349483]: 2025-11-25 08:42:44.905105578 +0000 UTC m=+0.248910523 container remove 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:42:44 np0005534516 systemd[1]: libpod-conmon-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope: Deactivated successfully.
Nov 25 03:42:45 np0005534516 podman[349524]: 2025-11-25 08:42:45.126595895 +0000 UTC m=+0.043583819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:45 np0005534516 podman[349524]: 2025-11-25 08:42:45.238036061 +0000 UTC m=+0.155023955 container create 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:42:45 np0005534516 systemd[1]: Started libpod-conmon-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope.
Nov 25 03:42:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:45 np0005534516 podman[349524]: 2025-11-25 08:42:45.389266452 +0000 UTC m=+0.306254356 container init 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:42:45 np0005534516 podman[349524]: 2025-11-25 08:42:45.398990658 +0000 UTC m=+0.315978542 container start 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:42:45 np0005534516 podman[349524]: 2025-11-25 08:42:45.404797216 +0000 UTC m=+0.321785080 container attach 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:42:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 223 op/s
Nov 25 03:42:45 np0005534516 nova_compute[253538]: 2025-11-25 08:42:45.867 253542 DEBUG nova.network.neutron [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:45 np0005534516 nova_compute[253538]: 2025-11-25 08:42:45.908 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:46 np0005534516 elegant_booth[349541]: {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    "0": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "devices": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "/dev/loop3"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            ],
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_name": "ceph_lv0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_size": "21470642176",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "name": "ceph_lv0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "tags": {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_name": "ceph",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.crush_device_class": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.encrypted": "0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_id": "0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.vdo": "0"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            },
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "vg_name": "ceph_vg0"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        }
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    ],
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    "1": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "devices": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "/dev/loop4"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            ],
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_name": "ceph_lv1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_size": "21470642176",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "name": "ceph_lv1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "tags": {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_name": "ceph",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.crush_device_class": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.encrypted": "0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_id": "1",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.vdo": "0"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            },
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "vg_name": "ceph_vg1"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        }
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    ],
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    "2": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "devices": [
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "/dev/loop5"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            ],
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_name": "ceph_lv2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_size": "21470642176",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "name": "ceph_lv2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "tags": {
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.cluster_name": "ceph",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.crush_device_class": "",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.encrypted": "0",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osd_id": "2",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:                "ceph.vdo": "0"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            },
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "type": "block",
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:            "vg_name": "ceph_vg2"
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:        }
Nov 25 03:42:46 np0005534516 elegant_booth[349541]:    ]
Nov 25 03:42:46 np0005534516 elegant_booth[349541]: }
Nov 25 03:42:46 np0005534516 systemd[1]: libpod-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope: Deactivated successfully.
Nov 25 03:42:46 np0005534516 podman[349524]: 2025-11-25 08:42:46.180557797 +0000 UTC m=+1.097545711 container died 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:42:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439-merged.mount: Deactivated successfully.
Nov 25 03:42:46 np0005534516 podman[349524]: 2025-11-25 08:42:46.302995053 +0000 UTC m=+1.219982947 container remove 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:42:46 np0005534516 systemd[1]: libpod-conmon-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope: Deactivated successfully.
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.912 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.913 253542 DEBUG nova.objects.instance [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.925 253542 DEBUG nova.virt.libvirt.vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:42:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": 
"6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.926 253542 DEBUG nova.network.os_vif_util [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.927 253542 DEBUG nova.network.os_vif_util [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.927 253542 DEBUG os_vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.930 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7c4b9b0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:46 np0005534516 nova_compute[253538]: 2025-11-25 08:42:46.937 253542 INFO os_vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')#033[00m
Nov 25 03:42:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.086 253542 DEBUG nova.compute.manager [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.087 253542 DEBUG nova.compute.manager [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:42:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 25 03:42:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.144337181 +0000 UTC m=+0.062991538 container create 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:42:47 np0005534516 systemd[1]: Started libpod-conmon-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope.
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.111884827 +0000 UTC m=+0.030539204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.230533121 +0000 UTC m=+0.149187488 container init 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.244238953 +0000 UTC m=+0.162893290 container start 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 03:42:47 np0005534516 sleepy_bouman[349740]: 167 167
Nov 25 03:42:47 np0005534516 systemd[1]: libpod-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope: Deactivated successfully.
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.25145554 +0000 UTC m=+0.170109877 container attach 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.25181943 +0000 UTC m=+0.170473787 container died 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 03:42:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-931829aa5af67a8842217d8c23573fa4521208b21e856625ffc10155b8b50ece-merged.mount: Deactivated successfully.
Nov 25 03:42:47 np0005534516 podman[349724]: 2025-11-25 08:42:47.307793815 +0000 UTC m=+0.226448152 container remove 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:42:47 np0005534516 systemd[1]: libpod-conmon-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope: Deactivated successfully.
Nov 25 03:42:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.8 MiB/s wr, 196 op/s
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.522 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting instance files /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.524 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deletion of /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del complete#033[00m
Nov 25 03:42:47 np0005534516 podman[349765]: 2025-11-25 08:42:47.53363372 +0000 UTC m=+0.069356671 container create 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:42:47 np0005534516 systemd[1]: Started libpod-conmon-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope.
Nov 25 03:42:47 np0005534516 podman[349765]: 2025-11-25 08:42:47.502022558 +0000 UTC m=+0.037745559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.603 253542 INFO nova.scheduler.client.report [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 3e75d0af-c514-42c5-aa05-88ae5552f196#033[00m
Nov 25 03:42:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:42:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:42:47 np0005534516 podman[349765]: 2025-11-25 08:42:47.661283748 +0000 UTC m=+0.197006749 container init 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.662 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.663 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:47 np0005534516 podman[349765]: 2025-11-25 08:42:47.67415519 +0000 UTC m=+0.209878131 container start 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:42:47 np0005534516 nova_compute[253538]: 2025-11-25 08:42:47.709 253542 DEBUG oslo_concurrency.processutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:47 np0005534516 podman[349765]: 2025-11-25 08:42:47.78865333 +0000 UTC m=+0.324376281 container attach 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238448093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.215 253542 DEBUG oslo_concurrency.processutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.229 253542 DEBUG nova.compute.provider_tree [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.245 253542 DEBUG nova.scheduler.client.report [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.272 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.311 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]: {
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_id": 1,
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "type": "bluestore"
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    },
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_id": 2,
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "type": "bluestore"
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    },
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_id": 0,
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:        "type": "bluestore"
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]:    }
Nov 25 03:42:48 np0005534516 musing_dubinsky[349781]: }
Nov 25 03:42:48 np0005534516 systemd[1]: libpod-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Deactivated successfully.
Nov 25 03:42:48 np0005534516 systemd[1]: libpod-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Consumed 1.102s CPU time.
Nov 25 03:42:48 np0005534516 podman[349765]: 2025-11-25 08:42:48.786680517 +0000 UTC m=+1.322403498 container died 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:42:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534-merged.mount: Deactivated successfully.
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.845 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.845 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:48 np0005534516 nova_compute[253538]: 2025-11-25 08:42:48.863 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:48 np0005534516 podman[349765]: 2025-11-25 08:42:48.868434805 +0000 UTC m=+1.404157756 container remove 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:42:48 np0005534516 systemd[1]: libpod-conmon-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Deactivated successfully.
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:42:48 np0005534516 podman[349846]: 2025-11-25 08:42:48.921891352 +0000 UTC m=+0.071720225 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:42:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:48 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9d7b91ca-7127-452e-87be-20fe5aabb5fb does not exist
Nov 25 03:42:48 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 236f71f6-fc38-45d3-899c-fc5c7613c370 does not exist
Nov 25 03:42:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:42:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 355 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Nov 25 03:42:50 np0005534516 nova_compute[253538]: 2025-11-25 08:42:50.560 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060155.558094, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:50 np0005534516 nova_compute[253538]: 2025-11-25 08:42:50.561 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:42:50 np0005534516 nova_compute[253538]: 2025-11-25 08:42:50.593 253542 DEBUG nova.compute.manager [None req-cc8c31b2-1168-4f1c-8f1d-dd1a234fc641 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 313 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 108 op/s
Nov 25 03:42:51 np0005534516 nova_compute[253538]: 2025-11-25 08:42:51.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:51 np0005534516 nova_compute[253538]: 2025-11-25 08:42:51.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 25 03:42:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 25 03:42:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:42:53
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log']
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.400 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.402 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.402 253542 INFO nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Unshelving#033[00m
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 14 KiB/s wr, 60 op/s
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.488 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.489 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.495 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.507 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.519 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.519 253542 INFO nova.compute.claims [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:42:53 np0005534516 nova_compute[253538]: 2025-11-25 08:42:53.633 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:42:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:42:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:42:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362125678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:42:54 np0005534516 nova_compute[253538]: 2025-11-25 08:42:54.109 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:54 np0005534516 nova_compute[253538]: 2025-11-25 08:42:54.116 253542 DEBUG nova.compute.provider_tree [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:42:54 np0005534516 nova_compute[253538]: 2025-11-25 08:42:54.131 253542 DEBUG nova.scheduler.client.report [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:42:54 np0005534516 nova_compute[253538]: 2025-11-25 08:42:54.162 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:54 np0005534516 nova_compute[253538]: 2025-11-25 08:42:54.549 253542 INFO nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 25 03:42:54 np0005534516 podman[349941]: 2025-11-25 08:42:54.877440789 +0000 UTC m=+0.118449428 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:42:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 13 KiB/s wr, 42 op/s
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.481 253542 DEBUG nova.compute.manager [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.482 253542 DEBUG nova.compute.manager [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.483 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.534 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060160.532469, 0774dd07-d931-40b5-8590-915c0611277d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.534 253542 INFO nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:42:55 np0005534516 nova_compute[253538]: 2025-11-25 08:42:55.553 253542 DEBUG nova.compute.manager [None req-d4722c92-5b78-4994-8a8b-3c62d78a4efe - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:42:56 np0005534516 nova_compute[253538]: 2025-11-25 08:42:56.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:56 np0005534516 nova_compute[253538]: 2025-11-25 08:42:56.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.213 253542 DEBUG nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.230 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.232 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.233 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating image(s)#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.260 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.264 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.267 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.267 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.313 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.343 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.347 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "88c5d518cc852b54df0f546077c13ae28485063d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.348 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "88c5d518cc852b54df0f546077c13ae28485063d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 11 KiB/s wr, 35 op/s
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.802 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 03:42:57 np0005534516 podman[350015]: 2025-11-25 08:42:57.867200356 +0000 UTC m=+0.112506948 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.881 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.882 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491@snap to None/3e75d0af-c514-42c5-aa05-88ae5552f196_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:42:57 np0005534516 nova_compute[253538]: 2025-11-25 08:42:57.989 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "88c5d518cc852b54df0f546077c13ae28485063d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.137 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.205 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.876 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Image rbd:vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.878 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.878 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Ensure instance console log exists: /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.883 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start _get_guest_xml network_info=[{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:42:32Z,direct_url=<?>,disk_format='raw',id=34e9b311-13e0-4ffd-bc6f-64a46ba4b491,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-439401428-shelved',owner='295fcc758cf24ab4b01eb393f4863e36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:42:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.890 253542 WARNING nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.897 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.898 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.902 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.902 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.903 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.903 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:42:32Z,direct_url=<?>,disk_format='raw',id=34e9b311-13e0-4ffd-bc6f-64a46ba4b491,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-439401428-shelved',owner='295fcc758cf24ab4b01eb393f4863e36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:42:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.904 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.908 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:58 np0005534516 nova_compute[253538]: 2025-11-25 08:42:58.924 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:42:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265293251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.395 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.430 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.438 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:42:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 296 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 267 KiB/s wr, 37 op/s
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.649 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.650 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.672 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:42:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:42:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982515680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.964 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.967 253542 DEBUG nova.virt.libvirt.vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='34e9b311-13e0-4ffd-bc6f-64a46ba4b491',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.968 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.970 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.973 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.992 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <uuid>3e75d0af-c514-42c5-aa05-88ae5552f196</uuid>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <name>instance-0000005f</name>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerActionsTestOtherB-server-439401428</nova:name>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:42:58</nova:creationTime>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:user uuid="66e1f27ea22d4ee08a0a470a8c18135e">tempest-ServerActionsTestOtherB-587178207-project-member</nova:user>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:project uuid="295fcc758cf24ab4b01eb393f4863e36">tempest-ServerActionsTestOtherB-587178207</nova:project>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="34e9b311-13e0-4ffd-bc6f-64a46ba4b491"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <nova:port uuid="f7c4b9b0-3445-468a-a19a-8b19b2d029a2">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="serial">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="uuid">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:83:e9:db"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <target dev="tapf7c4b9b0-34"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log" append="off"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:42:59 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:42:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:42:59 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:42:59 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Preparing to wait for external event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.994 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.994 253542 DEBUG nova.virt.libvirt.vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='34e9b311-13e0-4ffd-bc6f-64a46ba4b491',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='
virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.995 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.995 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.996 253542 DEBUG os_vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.997 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.997 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:42:59 np0005534516 nova_compute[253538]: 2025-11-25 08:42:59.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.001 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7c4b9b0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.002 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7c4b9b0-34, col_values=(('external_ids', {'iface-id': 'f7c4b9b0-3445-468a-a19a-8b19b2d029a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:e9:db', 'vm-uuid': '3e75d0af-c514-42c5-aa05-88ae5552f196'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:00 np0005534516 NetworkManager[48915]: <info>  [1764060180.0604] manager: (tapf7c4b9b0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.068 253542 INFO os_vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.124 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.125 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.125 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No VIF found with MAC fa:16:3e:83:e9:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.127 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Using config drive#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.161 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.181 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.213 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'keypairs' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.557 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating config drive at /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.570 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_ot580u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.719 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_ot580u" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.768 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:00 np0005534516 nova_compute[253538]: 2025-11-25 08:43:00.776 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.021 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.022 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting local config drive /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config because it was imported into RBD.#033[00m
Nov 25 03:43:01 np0005534516 kernel: tapf7c4b9b0-34: entered promiscuous mode
Nov 25 03:43:01 np0005534516 NetworkManager[48915]: <info>  [1764060181.0946] manager: (tapf7c4b9b0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:01Z|00955|binding|INFO|Claiming lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for this chassis.
Nov 25 03:43:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:01Z|00956|binding|INFO|f7c4b9b0-3445-468a-a19a-8b19b2d029a2: Claiming fa:16:3e:83:e9:db 10.100.0.12
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.106 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.107 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.109 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35#033[00m
Nov 25 03:43:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:01Z|00957|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 ovn-installed in OVS
Nov 25 03:43:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:01Z|00958|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 up in Southbound
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2ea6c4-fc12-441e-b2b0-ce5dc87b48d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 systemd-udevd[350337]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:43:01 np0005534516 systemd-machined[215790]: New machine qemu-122-instance-0000005f.
Nov 25 03:43:01 np0005534516 NetworkManager[48915]: <info>  [1764060181.1569] device (tapf7c4b9b0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:43:01 np0005534516 NetworkManager[48915]: <info>  [1764060181.1584] device (tapf7c4b9b0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:43:01 np0005534516 systemd[1]: Started Virtual Machine qemu-122-instance-0000005f.
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.166 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[52780ba8-c756-476c-891b-851944d1c59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.170 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a516c0de-2c89-40e8-b9a3-8d03c528276f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.206 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8574ea3-1eac-475a-8ace-5586d0f1172f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b20ab35-deec-4055-bcdb-ca320108909f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350348, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.249 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2ac9ce-f99f-4f8c-a6e1-6950feaa5ecd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350350, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350350, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.388 253542 DEBUG nova.compute.manager [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.389 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.396 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.396 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.397 253542 DEBUG nova.compute.manager [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Processing event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:43:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 334 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 29 op/s
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.793 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.794 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.792646, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.794 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Started (Lifecycle Event)#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.798 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.803 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance spawned successfully.#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.813 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.817 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.7937539, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.850 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.855 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.7971272, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.856 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.874 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.878 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:43:01 np0005534516 nova_compute[253538]: 2025-11-25 08:43:01.896 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 25 03:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 25 03:43:02 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.236 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.320 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 351 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 110 op/s
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.660 253542 DEBUG nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.660 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.662 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 DEBUG nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:43:03 np0005534516 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 WARNING nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019389442721560201 of space, bias 1.0, pg target 0.581683281646806 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:43:03 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:43:05 np0005534516 nova_compute[253538]: 2025-11-25 08:43:05.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 319 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 03:43:06 np0005534516 nova_compute[253538]: 2025-11-25 08:43:06.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 293 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 03:43:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 25 03:43:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 25 03:43:08 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 25 03:43:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 293 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Nov 25 03:43:10 np0005534516 nova_compute[253538]: 2025-11-25 08:43:10.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.469 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.471 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.472 253542 INFO nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Terminating instance#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.473 253542 DEBUG nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:43:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 280 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 817 KiB/s wr, 139 op/s
Nov 25 03:43:11 np0005534516 kernel: tap957fffc1-ba (unregistering): left promiscuous mode
Nov 25 03:43:11 np0005534516 NetworkManager[48915]: <info>  [1764060191.5373] device (tap957fffc1-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:11Z|00959|binding|INFO|Releasing lport 957fffc1-ba49-42af-b933-a544944131aa from this chassis (sb_readonly=0)
Nov 25 03:43:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:11Z|00960|binding|INFO|Setting lport 957fffc1-ba49-42af-b933-a544944131aa down in Southbound
Nov 25 03:43:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:11Z|00961|binding|INFO|Removing iface tap957fffc1-ba ovn-installed in OVS
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.591 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b1:e0 10.100.0.8'], port_security=['fa:16:3e:5b:b1:e0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5f960b00-a365-4665-8a74-50d2e7b7f940', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=957fffc1-ba49-42af-b933-a544944131aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.593 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 957fffc1-ba49-42af-b933-a544944131aa in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.595 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35#033[00m
Nov 25 03:43:11 np0005534516 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 25 03:43:11 np0005534516 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 16.614s CPU time.
Nov 25 03:43:11 np0005534516 systemd-machined[215790]: Machine qemu-119-instance-00000061 terminated.
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d542d4d-a5af-47cb-b191-087d026e08f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.647 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5d83f0c4-7262-4937-8a4a-6c05722398bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.651 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1da6f318-475f-4f53-bca7-468a6698e946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.679 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[253c3163-56cf-4389-b2b1-41eea9a43db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.700 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[056108d3-1cb9-4a3b-bc7e-c3d9779a5045]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350406, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfa9947-ba81-4d90-91ab-3ba7854c8fe5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350412, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350412, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.725 253542 INFO nova.virt.libvirt.driver [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance destroyed successfully.#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.726 253542 DEBUG nova.objects.instance [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 5f960b00-a365-4665-8a74-50d2e7b7f940 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.737 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.739 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.741 253542 DEBUG nova.virt.libvirt.vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:41:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-402804146',display_name='tempest-ServerActionsTestOtherB-server-402804146',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-402804146',id=97,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:41:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-totndcwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:41:52Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=5f960b00-a365-4665-8a74-50d2e7b7f940,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.741 253542 DEBUG nova.network.os_vif_util [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.742 253542 DEBUG nova.network.os_vif_util [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.743 253542 DEBUG os_vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.745 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap957fffc1-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.752 253542 INFO os_vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba')#033[00m
Nov 25 03:43:11 np0005534516 nova_compute[253538]: 2025-11-25 08:43:11.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 25 03:43:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 25 03:43:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.248 253542 INFO nova.virt.libvirt.driver [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deleting instance files /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940_del#033[00m
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.249 253542 INFO nova.virt.libvirt.driver [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deletion of /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940_del complete#033[00m
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.345 253542 INFO nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.346 253542 DEBUG oslo.service.loopingcall [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.347 253542 DEBUG nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:43:12 np0005534516 nova_compute[253538]: 2025-11-25 08:43:12.347 253542 DEBUG nova.network.neutron [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.467 253542 DEBUG nova.network.neutron [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:43:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 234 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 KiB/s wr, 139 op/s
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.485 253542 INFO nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.549 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.550 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.596 253542 DEBUG nova.compute.manager [req-ade884f5-bd65-43fa-b5fa-6efd6d5c5317 req-9e8c1aa4-2bf5-4a57-b5b6-0061d44b10fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received event network-vif-deleted-957fffc1-ba49-42af-b933-a544944131aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:13 np0005534516 nova_compute[253538]: 2025-11-25 08:43:13.644 253542 DEBUG oslo_concurrency.processutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192620321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.097 253542 DEBUG oslo_concurrency.processutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.106 253542 DEBUG nova.compute.provider_tree [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.130 253542 DEBUG nova.scheduler.client.report [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.150 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.186 253542 INFO nova.scheduler.client.report [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 5f960b00-a365-4665-8a74-50d2e7b7f940#033[00m
Nov 25 03:43:14 np0005534516 nova_compute[253538]: 2025-11-25 08:43:14.261 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.139 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.141 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.142 253542 INFO nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Terminating instance#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.143 253542 DEBUG nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:43:15 np0005534516 kernel: tapf7c4b9b0-34 (unregistering): left promiscuous mode
Nov 25 03:43:15 np0005534516 NetworkManager[48915]: <info>  [1764060195.2022] device (tapf7c4b9b0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:43:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:15Z|00962|binding|INFO|Releasing lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 from this chassis (sb_readonly=0)
Nov 25 03:43:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:15Z|00963|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 down in Southbound
Nov 25 03:43:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:15Z|00964|binding|INFO|Removing iface tapf7c4b9b0-34 ovn-installed in OVS
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.223 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.227 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.230 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.231 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55d8aadf-ef7c-473f-82f8-ba89c95f2039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.232 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 namespace which is not needed anymore#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:15 np0005534516 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 03:43:15 np0005534516 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Consumed 14.099s CPU time.
Nov 25 03:43:15 np0005534516 systemd-machined[215790]: Machine qemu-122-instance-0000005f terminated.
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.421 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.422 253542 DEBUG nova.objects.instance [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:15 np0005534516 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : haproxy version is 2.8.14-c23fe91
Nov 25 03:43:15 np0005534516 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : path to executable is /usr/sbin/haproxy
Nov 25 03:43:15 np0005534516 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [WARNING]  (344562) : Exiting Master process...
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.435 253542 DEBUG nova.virt.libvirt.vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:43:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:43:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.435 253542 DEBUG nova.network.os_vif_util [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.436 253542 DEBUG nova.network.os_vif_util [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.436 253542 DEBUG os_vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:43:15 np0005534516 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [ALERT]    (344562) : Current worker (344564) exited with code 143 (Terminated)
Nov 25 03:43:15 np0005534516 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [WARNING]  (344562) : All workers exited. Exiting... (0)
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.438 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7c4b9b0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:15 np0005534516 systemd[1]: libpod-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b.scope: Deactivated successfully.
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:43:15 np0005534516 podman[350485]: 2025-11-25 08:43:15.445929284 +0000 UTC m=+0.100659684 container died f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.446 253542 INFO os_vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')#033[00m
Nov 25 03:43:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 194 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 19 KiB/s wr, 88 op/s
Nov 25 03:43:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:43:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6947475d689df6f820a434a2c1f0b94b91a492171023f6878dbc9d586341b108-merged.mount: Deactivated successfully.
Nov 25 03:43:15 np0005534516 podman[350485]: 2025-11-25 08:43:15.520631399 +0000 UTC m=+0.175361799 container cleanup f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:43:15 np0005534516 systemd[1]: libpod-conmon-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b.scope: Deactivated successfully.
Nov 25 03:43:15 np0005534516 podman[350538]: 2025-11-25 08:43:15.666561806 +0000 UTC m=+0.117541924 container remove f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc633608-0346-4b12-84ed-3721ad216f44]: (4, ('Tue Nov 25 08:43:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 (f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b)\nf3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b\nTue Nov 25 08:43:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 (f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b)\nf3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.679 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9ba656-c9e9-4f4f-b29a-724bb274e640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.682 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:15 np0005534516 kernel: tap6e77a51d-20: left promiscuous mode
Nov 25 03:43:15 np0005534516 nova_compute[253538]: 2025-11-25 08:43:15.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78b00812-2f1a-462e-8b6c-aabaee7db9ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54924742-e7df-4d9a-a814-a4333e4cf4b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53f396df-f206-4da1-85f1-e5cdfa411680]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.754 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de1568c3-e6db-454d-979d-375a2cd5ecb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538129, 'reachable_time': 18513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350555, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:15 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6e77a51d\x2d2695\x2d4e70\x2d8b9d\x2dc02ec0c62f35.mount: Deactivated successfully.
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.758 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:43:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.759 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7f41a51e-62e6-4c5c-a31e-e4d8ab83c517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.226 253542 INFO nova.virt.libvirt.driver [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting instance files /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.227 253542 INFO nova.virt.libvirt.driver [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deletion of /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del complete#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.268 253542 INFO nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.268 253542 DEBUG oslo.service.loopingcall [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.269 253542 DEBUG nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.269 253542 DEBUG nova.network.neutron [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.858 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.858 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.859 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.859 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.860 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:43:16 np0005534516 nova_compute[253538]: 2025-11-25 08:43:16.860 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.411 253542 DEBUG nova.network.neutron [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.435 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 1.17 seconds to deallocate network for instance.#033[00m
Nov 25 03:43:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 141 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 484 KiB/s rd, 23 KiB/s wr, 131 op/s
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.506 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.506 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.554 253542 DEBUG oslo_concurrency.processutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.602 253542 DEBUG nova.compute.manager [req-ad0623fa-3212-4403-8597-e3afc31d3fe5 req-bb736b34-6c3c-45b0-93b1-01185bf88f55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-deleted-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643779945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.988 253542 DEBUG oslo_concurrency.processutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:17 np0005534516 nova_compute[253538]: 2025-11-25 08:43:17.994 253542 DEBUG nova.compute.provider_tree [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:43:18 np0005534516 nova_compute[253538]: 2025-11-25 08:43:18.012 253542 DEBUG nova.scheduler.client.report [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:43:18 np0005534516 nova_compute[253538]: 2025-11-25 08:43:18.042 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:18 np0005534516 nova_compute[253538]: 2025-11-25 08:43:18.073 253542 INFO nova.scheduler.client.report [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 3e75d0af-c514-42c5-aa05-88ae5552f196#033[00m
Nov 25 03:43:18 np0005534516 nova_compute[253538]: 2025-11-25 08:43:18.141 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.057 253542 DEBUG nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.057 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:43:19 np0005534516 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 WARNING nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:43:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 125 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 23 KiB/s wr, 149 op/s
Nov 25 03:43:20 np0005534516 podman[350579]: 2025-11-25 08:43:20.198453697 +0000 UTC m=+0.066859263 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:43:20 np0005534516 nova_compute[253538]: 2025-11-25 08:43:20.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:20 np0005534516 nova_compute[253538]: 2025-11-25 08:43:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 19 KiB/s wr, 125 op/s
Nov 25 03:43:21 np0005534516 nova_compute[253538]: 2025-11-25 08:43:21.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:22 np0005534516 nova_compute[253538]: 2025-11-25 08:43:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:22 np0005534516 nova_compute[253538]: 2025-11-25 08:43:22.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:43:23 np0005534516 nova_compute[253538]: 2025-11-25 08:43:23.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 18 KiB/s wr, 93 op/s
Nov 25 03:43:23 np0005534516 nova_compute[253538]: 2025-11-25 08:43:23.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 25 03:43:25 np0005534516 nova_compute[253538]: 2025-11-25 08:43:25.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:25 np0005534516 nova_compute[253538]: 2025-11-25 08:43:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:25 np0005534516 nova_compute[253538]: 2025-11-25 08:43:25.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:43:25 np0005534516 nova_compute[253538]: 2025-11-25 08:43:25.601 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:43:25 np0005534516 nova_compute[253538]: 2025-11-25 08:43:25.602 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:25 np0005534516 podman[350602]: 2025-11-25 08:43:25.831382293 +0000 UTC m=+0.074632725 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 03:43:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 25 03:43:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 25 03:43:26 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 25 03:43:26 np0005534516 nova_compute[253538]: 2025-11-25 08:43:26.721 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060191.719975, 5f960b00-a365-4665-8a74-50d2e7b7f940 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:26 np0005534516 nova_compute[253538]: 2025-11-25 08:43:26.721 253542 INFO nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:43:26 np0005534516 nova_compute[253538]: 2025-11-25 08:43:26.745 253542 DEBUG nova.compute.manager [None req-d948998c-dde8-4137-8337-8db65ddd58ff - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:26 np0005534516 nova_compute[253538]: 2025-11-25 08:43:26.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Nov 25 03:43:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:28.208 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:28.209 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:43:28 np0005534516 nova_compute[253538]: 2025-11-25 08:43:28.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:28 np0005534516 podman[350622]: 2025-11-25 08:43:28.883716942 +0000 UTC m=+0.130268280 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:43:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.4 KiB/s wr, 15 op/s
Nov 25 03:43:29 np0005534516 nova_compute[253538]: 2025-11-25 08:43:29.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 25 03:43:29 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 25 03:43:30 np0005534516 nova_compute[253538]: 2025-11-25 08:43:30.421 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060195.420136, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:30 np0005534516 nova_compute[253538]: 2025-11-25 08:43:30.422 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:43:30 np0005534516 nova_compute[253538]: 2025-11-25 08:43:30.454 253542 DEBUG nova.compute.manager [None req-d70cc2fb-970d-4ec4-942c-f046682c686e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:30 np0005534516 nova_compute[253538]: 2025-11-25 08:43:30.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:31.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:31 np0005534516 nova_compute[253538]: 2025-11-25 08:43:31.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789811154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.072 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.267 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.269 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3961MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.270 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.270 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.335 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.356 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203456022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.842 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.848 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:43:32 np0005534516 nova_compute[253538]: 2025-11-25 08:43:32.865 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:43:33 np0005534516 nova_compute[253538]: 2025-11-25 08:43:33.095 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:43:33 np0005534516 nova_compute[253538]: 2025-11-25 08:43:33.096 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 50 op/s
Nov 25 03:43:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 25 03:43:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 25 03:43:34 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 25 03:43:35 np0005534516 nova_compute[253538]: 2025-11-25 08:43:35.090 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:35 np0005534516 nova_compute[253538]: 2025-11-25 08:43:35.091 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 KiB/s wr, 96 op/s
Nov 25 03:43:35 np0005534516 nova_compute[253538]: 2025-11-25 08:43:35.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:36 np0005534516 nova_compute[253538]: 2025-11-25 08:43:36.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.3 KiB/s wr, 91 op/s
Nov 25 03:43:38 np0005534516 nova_compute[253538]: 2025-11-25 08:43:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:43:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Nov 25 03:43:40 np0005534516 nova_compute[253538]: 2025-11-25 08:43:40.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 25 03:43:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 25 03:43:40 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 25 03:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.217 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.217 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.231 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.295 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.295 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.300 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.300 253542 INFO nova.compute.claims [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.411 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.1 KiB/s wr, 76 op/s
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670874538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.907 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.917 253542 DEBUG nova.compute.provider_tree [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:43:41 np0005534516 nova_compute[253538]: 2025-11-25 08:43:41.933 253542 DEBUG nova.scheduler.client.report [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.047 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.048 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.097 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.097 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.118 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.143 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:43:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 25 03:43:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 25 03:43:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.277 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.278 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.279 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating image(s)#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.307 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.332 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.361 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.365 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.408 253542 DEBUG nova.policy [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3d8339b871b34cf5bbf797eb592ec74e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31e0445675494c73be5eb4d1a6ec9597', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.462 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.464 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.465 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.465 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.497 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.501 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5dba47f-da80-465e-9659-1897b7d8b1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.865 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5dba47f-da80-465e-9659-1897b7d8b1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.916 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] resizing rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:43:42 np0005534516 nova_compute[253538]: 2025-11-25 08:43:42.999 253542 DEBUG nova.objects.instance [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'migration_context' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.016 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.016 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Ensure instance console log exists: /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:43 np0005534516 nova_compute[253538]: 2025-11-25 08:43:43.174 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Successfully created port: 5de7e6a0-0b2c-4247-8314-ebb08913a220 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:43:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 96 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1005 KiB/s wr, 89 op/s
Nov 25 03:43:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 132 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 25 03:43:45 np0005534516 nova_compute[253538]: 2025-11-25 08:43:45.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 25 03:43:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 25 03:43:45 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 25 03:43:45 np0005534516 nova_compute[253538]: 2025-11-25 08:43:45.957 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Successfully updated port: 5de7e6a0-0b2c-4247-8314-ebb08913a220 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.008 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.008 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquired lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.009 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.071 253542 DEBUG nova.compute.manager [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-changed-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.072 253542 DEBUG nova.compute.manager [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Refreshing instance network info cache due to event network-changed-5de7e6a0-0b2c-4247-8314-ebb08913a220. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.072 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.177 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:43:46 np0005534516 nova_compute[253538]: 2025-11-25 08:43:46.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.104 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.175 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Releasing lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.175 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance network_info: |[{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.176 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.176 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Refreshing network info cache for port 5de7e6a0-0b2c-4247-8314-ebb08913a220 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.182 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start _get_guest_xml network_info=[{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.189 253542 WARNING nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.197 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.198 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.204 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.205 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.206 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.206 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.207 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.207 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.209 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.209 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.210 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.210 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.211 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.215 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 182 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 10 MiB/s wr, 83 op/s
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657472670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.742 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.777 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:47 np0005534516 nova_compute[253538]: 2025-11-25 08:43:47.782 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 25 03:43:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 25 03:43:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:43:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367120127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.301 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.304 253542 DEBUG nova.virt.libvirt.vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-ServerAddr
essesTestJSON-1579214935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:43:42Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.305 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.306 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.308 253542 DEBUG nova.objects.instance [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.331 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <uuid>a5dba47f-da80-465e-9659-1897b7d8b1dc</uuid>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <name>instance-00000064</name>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerAddressesTestJSON-server-1260419212</nova:name>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:43:47</nova:creationTime>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:user uuid="3d8339b871b34cf5bbf797eb592ec74e">tempest-ServerAddressesTestJSON-1579214935-project-member</nova:user>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:project uuid="31e0445675494c73be5eb4d1a6ec9597">tempest-ServerAddressesTestJSON-1579214935</nova:project>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <nova:port uuid="5de7e6a0-0b2c-4247-8314-ebb08913a220">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="serial">a5dba47f-da80-465e-9659-1897b7d8b1dc</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="uuid">a5dba47f-da80-465e-9659-1897b7d8b1dc</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a5dba47f-da80-465e-9659-1897b7d8b1dc_disk">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:68:61:a9"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <target dev="tap5de7e6a0-0b"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/console.log" append="off"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:43:48 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:43:48 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:43:48 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:43:48 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.332 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Preparing to wait for external event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.333 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.333 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.334 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.334 253542 DEBUG nova.virt.libvirt.vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-
ServerAddressesTestJSON-1579214935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:43:42Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.335 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.336 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.336 253542 DEBUG os_vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.338 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.339 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.343 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5de7e6a0-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.343 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5de7e6a0-0b, col_values=(('external_ids', {'iface-id': '5de7e6a0-0b2c-4247-8314-ebb08913a220', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:61:a9', 'vm-uuid': 'a5dba47f-da80-465e-9659-1897b7d8b1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:48 np0005534516 NetworkManager[48915]: <info>  [1764060228.3471] manager: (tap5de7e6a0-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.355 253542 INFO os_vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b')#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.400 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.400 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.401 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No VIF found with MAC fa:16:3e:68:61:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.401 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Using config drive#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.426 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.619 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updated VIF entry in instance network info cache for port 5de7e6a0-0b2c-4247-8314-ebb08913a220. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.619 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.635 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:43:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 25 03:43:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 25 03:43:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.894 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating config drive at /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config#033[00m
Nov 25 03:43:48 np0005534516 nova_compute[253538]: 2025-11-25 08:43:48.904 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80pndh_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.074 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80pndh_r" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.118 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.125 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 182 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 18 MiB/s wr, 96 op/s
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.521 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.522 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deleting local config drive /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config because it was imported into RBD.#033[00m
Nov 25 03:43:49 np0005534516 kernel: tap5de7e6a0-0b: entered promiscuous mode
Nov 25 03:43:49 np0005534516 NetworkManager[48915]: <info>  [1764060229.6008] manager: (tap5de7e6a0-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 25 03:43:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:49Z|00965|binding|INFO|Claiming lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 for this chassis.
Nov 25 03:43:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:49Z|00966|binding|INFO|5de7e6a0-0b2c-4247-8314-ebb08913a220: Claiming fa:16:3e:68:61:a9 10.100.0.12
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.616 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:61:a9 10.100.0.12'], port_security=['fa:16:3e:68:61:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5dba47f-da80-465e-9659-1897b7d8b1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e0445675494c73be5eb4d1a6ec9597', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c438a4c1-c66c-4de5-aec1-723426528db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d691f172-35d6-40c9-b66c-3f38462fa73a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5de7e6a0-0b2c-4247-8314-ebb08913a220) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.618 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5de7e6a0-0b2c-4247-8314-ebb08913a220 in datapath ff042a1d-571e-4cfe-b6e8-5931f017fb97 bound to our chassis#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.619 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff042a1d-571e-4cfe-b6e8-5931f017fb97#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb2e5c-1845-466f-b77f-8a778546745c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.634 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff042a1d-51 in ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.636 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff042a1d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e045c9b-f7ab-4167-9de9-724614a00cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2763b7-5599-425f-ac4f-352709816fef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 systemd-machined[215790]: New machine qemu-123-instance-00000064.
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.652 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d704204f-2c73-4f10-b369-1dbd19ef35df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a15b030e-e492-401e-b57a-85b9db973eaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:49Z|00967|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 ovn-installed in OVS
Nov 25 03:43:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:49Z|00968|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 up in Southbound
Nov 25 03:43:49 np0005534516 nova_compute[253538]: 2025-11-25 08:43:49.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:49 np0005534516 systemd-udevd[351132]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:43:49 np0005534516 NetworkManager[48915]: <info>  [1764060229.7015] device (tap5de7e6a0-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:43:49 np0005534516 NetworkManager[48915]: <info>  [1764060229.7029] device (tap5de7e6a0-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.718 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fa853ec2-382b-4a57-a9c2-6fc55ac70d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58d47d1b-0e17-4275-a7ca-5ec8689ae637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 NetworkManager[48915]: <info>  [1764060229.7267] manager: (tapff042a1d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/394)
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.762 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00014d28-37a5-4315-9a9c-c23d91f35cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.765 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b92cdde-8ff0-4058-a576-a978f99115b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 NetworkManager[48915]: <info>  [1764060229.7936] device (tapff042a1d-50): carrier: link connected
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.799 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[312a5206-7700-4fdf-9340-21f0ed8a2f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.823 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b285309f-435b-4d3b-b9c9-6587bccdc086]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff042a1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:19:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558439, 'reachable_time': 36076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351165, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1effee9-e4c5-4573-babe-999673e20253]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:195b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558439, 'tstamp': 558439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351168, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffc2d2e-3729-4d8f-93ef-7eda847b5a3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff042a1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:19:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558439, 'reachable_time': 36076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351169, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa5d4dd-f93c-4b8e-9f86-723061fabb71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:43:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:43:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:43:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.005 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[060524da-a9c2-4908-86a6-f30a1370b131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.007 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff042a1d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.008 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.009 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff042a1d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:50 np0005534516 NetworkManager[48915]: <info>  [1764060230.0122] manager: (tapff042a1d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Nov 25 03:43:50 np0005534516 kernel: tapff042a1d-50: entered promiscuous mode
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.017 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff042a1d-50, col_values=(('external_ids', {'iface-id': 'de55f523-30c0-4da1-b06c-16fb81cd9b07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:50Z|00969|binding|INFO|Releasing lport de55f523-30c0-4da1-b06c-16fb81cd9b07 from this chassis (sb_readonly=0)
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.020 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.021 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd50ef5-e1a2-480d-873b-793f7ee2159a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.021 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ff042a1d-571e-4cfe-b6e8-5931f017fb97
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ff042a1d-571e-4cfe-b6e8-5931f017fb97
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:43:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.022 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'env', 'PROCESS_TAG=haproxy-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff042a1d-571e-4cfe-b6e8-5931f017fb97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e5e5a70e-114a-4cd6-8e1a-45d5c8c831c4 does not exist
Nov 25 03:43:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5c51b251-b367-440f-974f-3a898733076d does not exist
Nov 25 03:43:50 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b5a232c-d973-4422-8447-5272c210ff61 does not exist
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.065 253542 DEBUG nova.compute.manager [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.065 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG nova.compute.manager [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Processing event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.246 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.248 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.2460926, a5dba47f-da80-465e-9659-1897b7d8b1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.250 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Started (Lifecycle Event)#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.259 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.263 253542 INFO nova.virt.libvirt.driver [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance spawned successfully.#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.264 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.272 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.278 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.291 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.292 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.293 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.294 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.295 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.296 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.303 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.304 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.2476506, a5dba47f-da80-465e-9659-1897b7d8b1dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.305 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:43:50 np0005534516 podman[351282]: 2025-11-25 08:43:50.314735834 +0000 UTC m=+0.065694802 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.323 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.329 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.253739, a5dba47f-da80-465e-9659-1897b7d8b1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.329 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.345 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:43:50 np0005534516 podman[351352]: 2025-11-25 08:43:50.417061852 +0000 UTC m=+0.053120988 container create 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:43:50 np0005534516 systemd[1]: Started libpod-conmon-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope.
Nov 25 03:43:50 np0005534516 podman[351352]: 2025-11-25 08:43:50.387953329 +0000 UTC m=+0.024012475 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:43:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bac1800189459c5bbd21b0537bc43218693610d30ca13e122f4e702894dc9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.495 253542 INFO nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.496 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:43:50 np0005534516 podman[351352]: 2025-11-25 08:43:50.509873181 +0000 UTC m=+0.145932397 container init 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:43:50 np0005534516 podman[351352]: 2025-11-25 08:43:50.521013085 +0000 UTC m=+0.157072241 container start 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:43:50 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : New worker (351403) forked
Nov 25 03:43:50 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : Loading success.
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.572 253542 INFO nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 9.30 seconds to build instance.#033[00m
Nov 25 03:43:50 np0005534516 nova_compute[253538]: 2025-11-25 08:43:50.588 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.748564316 +0000 UTC m=+0.052501071 container create 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:43:50 np0005534516 systemd[1]: Started libpod-conmon-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope.
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.724124391 +0000 UTC m=+0.028061166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.866621534 +0000 UTC m=+0.170558309 container init 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.876584445 +0000 UTC m=+0.180521200 container start 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.881417687 +0000 UTC m=+0.185354442 container attach 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:43:50 np0005534516 systemd[1]: libpod-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope: Deactivated successfully.
Nov 25 03:43:50 np0005534516 strange_matsumoto[351456]: 167 167
Nov 25 03:43:50 np0005534516 conmon[351456]: conmon 75c41a190a6cb71868c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope/container/memory.events
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.885125378 +0000 UTC m=+0.189062123 container died 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:50 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:43:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f1fca7a212772035d5792b863d52dbeb5a61cec29600586aea766c8dc9b2e325-merged.mount: Deactivated successfully.
Nov 25 03:43:50 np0005534516 podman[351440]: 2025-11-25 08:43:50.929138127 +0000 UTC m=+0.233074852 container remove 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:43:50 np0005534516 systemd[1]: libpod-conmon-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope: Deactivated successfully.
Nov 25 03:43:51 np0005534516 podman[351482]: 2025-11-25 08:43:51.092888169 +0000 UTC m=+0.042734425 container create 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:43:51 np0005534516 systemd[1]: Started libpod-conmon-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope.
Nov 25 03:43:51 np0005534516 podman[351482]: 2025-11-25 08:43:51.073956234 +0000 UTC m=+0.023802480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:51 np0005534516 podman[351482]: 2025-11-25 08:43:51.197859461 +0000 UTC m=+0.147705717 container init 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:43:51 np0005534516 podman[351482]: 2025-11-25 08:43:51.214045362 +0000 UTC m=+0.163891648 container start 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:43:51 np0005534516 podman[351482]: 2025-11-25 08:43:51.219084458 +0000 UTC m=+0.168930734 container attach 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:43:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 166 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 17 MiB/s wr, 108 op/s
Nov 25 03:43:51 np0005534516 nova_compute[253538]: 2025-11-25 08:43:51.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.217 253542 DEBUG nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.218 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.218 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.220 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.221 253542 DEBUG nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] No waiting events found dispatching network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:43:52 np0005534516 nova_compute[253538]: 2025-11-25 08:43:52.222 253542 WARNING nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received unexpected event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:43:52 np0005534516 agitated_faraday[351499]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:43:52 np0005534516 agitated_faraday[351499]: --> relative data size: 1.0
Nov 25 03:43:52 np0005534516 agitated_faraday[351499]: --> All data devices are unavailable
Nov 25 03:43:52 np0005534516 systemd[1]: libpod-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Deactivated successfully.
Nov 25 03:43:52 np0005534516 systemd[1]: libpod-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Consumed 1.005s CPU time.
Nov 25 03:43:52 np0005534516 podman[351482]: 2025-11-25 08:43:52.296236973 +0000 UTC m=+1.246083239 container died 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:43:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44-merged.mount: Deactivated successfully.
Nov 25 03:43:52 np0005534516 podman[351482]: 2025-11-25 08:43:52.466813572 +0000 UTC m=+1.416659838 container remove 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:43:52 np0005534516 systemd[1]: libpod-conmon-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Deactivated successfully.
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:43:53
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'images']
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.303636756 +0000 UTC m=+0.084336580 container create bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 systemd[1]: Started libpod-conmon-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope.
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.276091536 +0000 UTC m=+0.056791410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.393 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.394 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.396 253542 INFO nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Terminating instance#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.397 253542 DEBUG nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.415479504 +0000 UTC m=+0.196179318 container init bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.428411167 +0000 UTC m=+0.209110951 container start bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:43:53 np0005534516 kernel: tap5de7e6a0-0b (unregistering): left promiscuous mode
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.433268639 +0000 UTC m=+0.213968423 container attach bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:43:53 np0005534516 compassionate_torvalds[351699]: 167 167
Nov 25 03:43:53 np0005534516 NetworkManager[48915]: <info>  [1764060233.4368] device (tap5de7e6a0-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.439205371 +0000 UTC m=+0.219905165 container died bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:43:53 np0005534516 systemd[1]: libpod-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope: Deactivated successfully.
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:53Z|00970|binding|INFO|Releasing lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 from this chassis (sb_readonly=0)
Nov 25 03:43:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:53Z|00971|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 down in Southbound
Nov 25 03:43:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:43:53Z|00972|binding|INFO|Removing iface tap5de7e6a0-0b ovn-installed in OVS
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.462 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:61:a9 10.100.0.12'], port_security=['fa:16:3e:68:61:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5dba47f-da80-465e-9659-1897b7d8b1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e0445675494c73be5eb4d1a6ec9597', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c438a4c1-c66c-4de5-aec1-723426528db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d691f172-35d6-40c9-b66c-3f38462fa73a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5de7e6a0-0b2c-4247-8314-ebb08913a220) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:43:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5de7e6a0-0b2c-4247-8314-ebb08913a220 in datapath ff042a1d-571e-4cfe-b6e8-5931f017fb97 unbound from our chassis#033[00m
Nov 25 03:43:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.466 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff042a1d-571e-4cfe-b6e8-5931f017fb97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:43:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c4cd5c-d8a9-482b-8b42-dd249395e813]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.472 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 namespace which is not needed anymore#033[00m
Nov 25 03:43:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-592d93dc4749e2c5202be24bdd51d8b899a1fbdb34fa48c053075301d7347642-merged.mount: Deactivated successfully.
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 podman[351682]: 2025-11-25 08:43:53.491969529 +0000 UTC m=+0.272669333 container remove bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:43:53 np0005534516 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 25 03:43:53 np0005534516 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 3.626s CPU time.
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 MiB/s wr, 184 op/s
Nov 25 03:43:53 np0005534516 systemd-machined[215790]: Machine qemu-123-instance-00000064 terminated.
Nov 25 03:43:53 np0005534516 systemd[1]: libpod-conmon-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope: Deactivated successfully.
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.644 253542 INFO nova.virt.libvirt.driver [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance destroyed successfully.#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.644 253542 DEBUG nova.objects.instance [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'resources' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:43:53 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : haproxy version is 2.8.14-c23fe91
Nov 25 03:43:53 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : path to executable is /usr/sbin/haproxy
Nov 25 03:43:53 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [WARNING]  (351397) : Exiting Master process...
Nov 25 03:43:53 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [ALERT]    (351397) : Current worker (351403) exited with code 143 (Terminated)
Nov 25 03:43:53 np0005534516 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [WARNING]  (351397) : All workers exited. Exiting... (0)
Nov 25 03:43:53 np0005534516 systemd[1]: libpod-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope: Deactivated successfully.
Nov 25 03:43:53 np0005534516 conmon[351386]: conmon 4a41c7c055871c76bd92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope/container/memory.events
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.658 253542 DEBUG nova.virt.libvirt.vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:43:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-ServerAddressesTestJSON-1579214935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:43:50Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.658 253542 DEBUG nova.network.os_vif_util [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:43:53 np0005534516 podman[351743]: 2025-11-25 08:43:53.660991195 +0000 UTC m=+0.093583591 container died 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.660 253542 DEBUG nova.network.os_vif_util [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.663 253542 DEBUG os_vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.666 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5de7e6a0-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:43:53 np0005534516 nova_compute[253538]: 2025-11-25 08:43:53.673 253542 INFO os_vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b')#033[00m
Nov 25 03:43:53 np0005534516 podman[351762]: 2025-11-25 08:43:53.656605825 +0000 UTC m=+0.023719197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:43:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:43:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16-userdata-shm.mount: Deactivated successfully.
Nov 25 03:43:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-25bac1800189459c5bbd21b0537bc43218693610d30ca13e122f4e702894dc9e-merged.mount: Deactivated successfully.
Nov 25 03:43:54 np0005534516 podman[351762]: 2025-11-25 08:43:54.045605836 +0000 UTC m=+0.412719228 container create ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:43:54 np0005534516 podman[351743]: 2025-11-25 08:43:54.050764907 +0000 UTC m=+0.483357333 container cleanup 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 03:43:54 np0005534516 systemd[1]: libpod-conmon-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope: Deactivated successfully.
Nov 25 03:43:54 np0005534516 systemd[1]: Started libpod-conmon-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope.
Nov 25 03:43:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:54 np0005534516 podman[351762]: 2025-11-25 08:43:54.181101959 +0000 UTC m=+0.548215411 container init ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:43:54 np0005534516 podman[351817]: 2025-11-25 08:43:54.188850939 +0000 UTC m=+0.086455226 container remove 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:43:54 np0005534516 podman[351762]: 2025-11-25 08:43:54.196486348 +0000 UTC m=+0.563599720 container start ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.196 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aefc608-76cc-4acb-aad8-a7c15d33de3f]: (4, ('Tue Nov 25 08:43:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 (4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16)\n4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16\nTue Nov 25 08:43:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 (4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16)\n4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.199 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f020c1-99eb-453b-99b2-719709d92a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.200 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff042a1d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:43:54 np0005534516 podman[351762]: 2025-11-25 08:43:54.201047062 +0000 UTC m=+0.568160454 container attach ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:54 np0005534516 kernel: tapff042a1d-50: left promiscuous mode
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a31dfd-3e35-48c8-a401-d3cabfddb09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28bb16a8-f921-4818-99eb-175320205eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.229 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7a986f49-934d-4653-b18f-fe9053120e03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8303cc0-d461-4bf3-8714-b0c2fea72882]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558430, 'reachable_time': 18010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351839, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 systemd[1]: run-netns-ovnmeta\x2dff042a1d\x2d571e\x2d4cfe\x2db6e8\x2d5931f017fb97.mount: Deactivated successfully.
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.256 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:43:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.256 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c7606-6658-4648-88a4-65cc081a3bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.459 253542 INFO nova.virt.libvirt.driver [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deleting instance files /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc_del#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.460 253542 INFO nova.virt.libvirt.driver [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deletion of /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc_del complete#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.512 253542 INFO nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.513 253542 DEBUG oslo.service.loopingcall [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.513 253542 DEBUG nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:43:54 np0005534516 nova_compute[253538]: 2025-11-25 08:43:54.514 253542 DEBUG nova.network.neutron [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]: {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    "0": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "devices": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "/dev/loop3"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            ],
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_name": "ceph_lv0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_size": "21470642176",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "name": "ceph_lv0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "tags": {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_name": "ceph",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.crush_device_class": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.encrypted": "0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_id": "0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.vdo": "0"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            },
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "vg_name": "ceph_vg0"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        }
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    ],
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    "1": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "devices": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "/dev/loop4"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            ],
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_name": "ceph_lv1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_size": "21470642176",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "name": "ceph_lv1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "tags": {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_name": "ceph",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.crush_device_class": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.encrypted": "0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_id": "1",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.vdo": "0"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            },
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "vg_name": "ceph_vg1"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        }
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    ],
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    "2": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "devices": [
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "/dev/loop5"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            ],
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_name": "ceph_lv2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_size": "21470642176",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "name": "ceph_lv2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "tags": {
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.cluster_name": "ceph",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.crush_device_class": "",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.encrypted": "0",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osd_id": "2",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:                "ceph.vdo": "0"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            },
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "type": "block",
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:            "vg_name": "ceph_vg2"
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:        }
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]:    ]
Nov 25 03:43:54 np0005534516 quirky_mahavira[351825]: }
Nov 25 03:43:54 np0005534516 systemd[1]: libpod-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope: Deactivated successfully.
Nov 25 03:43:54 np0005534516 podman[351762]: 2025-11-25 08:43:54.97448539 +0000 UTC m=+1.341598762 container died ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:43:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f-merged.mount: Deactivated successfully.
Nov 25 03:43:55 np0005534516 podman[351762]: 2025-11-25 08:43:55.132603558 +0000 UTC m=+1.499716920 container remove ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:43:55 np0005534516 systemd[1]: libpod-conmon-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope: Deactivated successfully.
Nov 25 03:43:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 114 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 223 op/s
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.503 253542 DEBUG nova.network.neutron [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.519 253542 INFO nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 1.01 seconds to deallocate network for instance.#033[00m
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.566 253542 DEBUG nova.compute.manager [req-88607c3d-0aa9-4da4-96ac-3b578a7935c5 req-ad026df2-39bf-4768-9db6-af0ca59f31ea b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-deleted-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.569 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.570 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:43:55 np0005534516 nova_compute[253538]: 2025-11-25 08:43:55.639 253542 DEBUG oslo_concurrency.processutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.834906928 +0000 UTC m=+0.047970319 container create 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 03:43:55 np0005534516 systemd[1]: Started libpod-conmon-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope.
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.81260165 +0000 UTC m=+0.025665071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.953009836 +0000 UTC m=+0.166073237 container init 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.960416938 +0000 UTC m=+0.173480349 container start 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:43:55 np0005534516 beautiful_volhard[352041]: 167 167
Nov 25 03:43:55 np0005534516 systemd[1]: libpod-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope: Deactivated successfully.
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.971966653 +0000 UTC m=+0.185030114 container attach 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:43:55 np0005534516 podman[352006]: 2025-11-25 08:43:55.97259809 +0000 UTC m=+0.185661491 container died 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:43:56 np0005534516 podman[352030]: 2025-11-25 08:43:56.020614468 +0000 UTC m=+0.137090377 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 03:43:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0e8a224085af0e24fa38d0e5781a5be4407abbbccc5b25956a6cae3f64dc80ab-merged.mount: Deactivated successfully.
Nov 25 03:43:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:43:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928886384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.100 253542 DEBUG oslo_concurrency.processutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.108 253542 DEBUG nova.compute.provider_tree [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.128 253542 DEBUG nova.scheduler.client.report [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.155 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:56 np0005534516 podman[352006]: 2025-11-25 08:43:56.168011495 +0000 UTC m=+0.381074886 container remove 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:43:56 np0005534516 systemd[1]: libpod-conmon-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope: Deactivated successfully.
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.197 253542 INFO nova.scheduler.client.report [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Deleted allocations for instance a5dba47f-da80-465e-9659-1897b7d8b1dc#033[00m
Nov 25 03:43:56 np0005534516 nova_compute[253538]: 2025-11-25 08:43:56.270 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:43:56 np0005534516 podman[352079]: 2025-11-25 08:43:56.391529997 +0000 UTC m=+0.076410044 container create f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:43:56 np0005534516 podman[352079]: 2025-11-25 08:43:56.343105876 +0000 UTC m=+0.027986003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:43:56 np0005534516 systemd[1]: Started libpod-conmon-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope.
Nov 25 03:43:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:43:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:43:56 np0005534516 podman[352079]: 2025-11-25 08:43:56.524818228 +0000 UTC m=+0.209698315 container init f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:43:56 np0005534516 podman[352079]: 2025-11-25 08:43:56.533936647 +0000 UTC m=+0.218816694 container start f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:43:56 np0005534516 podman[352079]: 2025-11-25 08:43:56.551693651 +0000 UTC m=+0.236573688 container attach f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:43:57 np0005534516 nova_compute[253538]: 2025-11-25 08:43:57.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 25 03:43:57 np0005534516 busy_swirles[352096]: {
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_id": 1,
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "type": "bluestore"
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    },
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_id": 2,
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "type": "bluestore"
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    },
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_id": 0,
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:        "type": "bluestore"
Nov 25 03:43:57 np0005534516 busy_swirles[352096]:    }
Nov 25 03:43:57 np0005534516 busy_swirles[352096]: }
Nov 25 03:43:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 96 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Nov 25 03:43:57 np0005534516 systemd[1]: libpod-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope: Deactivated successfully.
Nov 25 03:43:57 np0005534516 podman[352079]: 2025-11-25 08:43:57.505625077 +0000 UTC m=+1.190505144 container died f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:43:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe-merged.mount: Deactivated successfully.
Nov 25 03:43:57 np0005534516 podman[352079]: 2025-11-25 08:43:57.577392213 +0000 UTC m=+1.262272290 container remove f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:43:57 np0005534516 systemd[1]: libpod-conmon-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope: Deactivated successfully.
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:43:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev be81313d-8599-41c6-a411-19a47d16bf95 does not exist
Nov 25 03:43:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b4f9d4b-7dac-4a6a-b2ea-bb0541c947ca does not exist
Nov 25 03:43:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:43:58 np0005534516 nova_compute[253538]: 2025-11-25 08:43:58.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:43:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 88 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 147 op/s
Nov 25 03:43:59 np0005534516 podman[352190]: 2025-11-25 08:43:59.910336029 +0000 UTC m=+0.143550033 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:44:00 np0005534516 nova_compute[253538]: 2025-11-25 08:44:00.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 03:44:02 np0005534516 nova_compute[253538]: 2025-11-25 08:44:02.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 1.4 KiB/s wr, 59 op/s
Nov 25 03:44:03 np0005534516 nova_compute[253538]: 2025-11-25 08:44:03.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:44:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1023 B/s wr, 17 op/s
Nov 25 03:44:07 np0005534516 nova_compute[253538]: 2025-11-25 08:44:07.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 397 B/s wr, 13 op/s
Nov 25 03:44:08 np0005534516 nova_compute[253538]: 2025-11-25 08:44:08.643 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060233.6417558, a5dba47f-da80-465e-9659-1897b7d8b1dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:44:08 np0005534516 nova_compute[253538]: 2025-11-25 08:44:08.643 253542 INFO nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:44:08 np0005534516 nova_compute[253538]: 2025-11-25 08:44:08.671 253542 DEBUG nova.compute.manager [None req-355a25f5-f478-46ba-ba76-ca3c584f810a - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:08 np0005534516 nova_compute[253538]: 2025-11-25 08:44:08.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 341 B/s wr, 11 op/s
Nov 25 03:44:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:12 np0005534516 nova_compute[253538]: 2025-11-25 08:44:12.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:13 np0005534516 nova_compute[253538]: 2025-11-25 08:44:13.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:17 np0005534516 nova_compute[253538]: 2025-11-25 08:44:17.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.391 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.391 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.404 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.475 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.476 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.487 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.488 253542 INFO nova.compute.claims [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.601 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:18 np0005534516 nova_compute[253538]: 2025-11-25 08:44:18.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:44:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897038534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.143 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.150 253542 DEBUG nova.compute.provider_tree [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.170 253542 DEBUG nova.scheduler.client.report [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.193 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.194 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.240 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.241 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.264 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.287 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.380 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.381 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.382 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating image(s)#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.406 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.435 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.460 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.464 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.527 253542 DEBUG nova.policy [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9e24446c871b4d7ca816a3833d05daa9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.553 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.554 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.555 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.555 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.579 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.583 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:19 np0005534516 nova_compute[253538]: 2025-11-25 08:44:19.924 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.005 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] resizing rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.115 253542 DEBUG nova.objects.instance [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'migration_context' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.129 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.129 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Ensure instance console log exists: /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.130 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.130 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.131 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:20 np0005534516 nova_compute[253538]: 2025-11-25 08:44:20.671 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Successfully created port: 82c25621-80b3-4927-957d-aec0a653a4f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:44:20 np0005534516 podman[352409]: 2025-11-25 08:44:20.854851462 +0000 UTC m=+0.097062696 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 03:44:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.817 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Successfully updated port: 82c25621-80b3-4927-957d-aec0a653a4f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquired lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.983 253542 DEBUG nova.compute.manager [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-changed-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.984 253542 DEBUG nova.compute.manager [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Refreshing instance network info cache due to event network-changed-82c25621-80b3-4927-957d-aec0a653a4f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:44:21 np0005534516 nova_compute[253538]: 2025-11-25 08:44:21.985 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:44:22 np0005534516 nova_compute[253538]: 2025-11-25 08:44:22.054 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:44:22 np0005534516 nova_compute[253538]: 2025-11-25 08:44:22.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 108 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 892 KiB/s wr, 25 op/s
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.654 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.677 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Releasing lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.678 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance network_info: |[{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.679 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.680 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Refreshing network info cache for port 82c25621-80b3-4927-957d-aec0a653a4f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.690 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start _get_guest_xml network_info=[{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.699 253542 WARNING nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.710 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.710 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.715 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.715 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.716 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.717 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.717 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.718 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.718 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.719 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.719 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.720 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.720 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.721 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.721 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.722 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:44:23 np0005534516 nova_compute[253538]: 2025-11-25 08:44:23.726 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:44:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321357064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.298 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.328 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.332 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:44:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:44:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455607539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.840 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.842 253542 DEBUG nova.virt.libvirt.vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:44:19Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.842 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.843 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.844 253542 DEBUG nova.objects.instance [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.860 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <uuid>65e7119e-238b-426c-9e9d-67b4c38c61b7</uuid>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <name>instance-00000065</name>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1971153713</nova:name>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:44:23</nova:creationTime>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:user uuid="9e24446c871b4d7ca816a3833d05daa9">tempest-ServerAddressesNegativeTestJSON-1856355067-project-member</nova:user>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:project uuid="95ca8ccb4dca4f58b3896b0533bab879">tempest-ServerAddressesNegativeTestJSON-1856355067</nova:project>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <nova:port uuid="82c25621-80b3-4927-957d-aec0a653a4f8">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="serial">65e7119e-238b-426c-9e9d-67b4c38c61b7</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="uuid">65e7119e-238b-426c-9e9d-67b4c38c61b7</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/65e7119e-238b-426c-9e9d-67b4c38c61b7_disk">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:c8:b4:91"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <target dev="tap82c25621-80"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/console.log" append="off"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:44:24 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:44:24 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:44:24 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:44:24 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.862 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Preparing to wait for external event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.862 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.863 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.863 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.864 253542 DEBUG nova.virt.libvirt.vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:44:19Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.864 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.866 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.866 253542 DEBUG os_vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.872 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c25621-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.873 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82c25621-80, col_values=(('external_ids', {'iface-id': '82c25621-80b3-4927-957d-aec0a653a4f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:b4:91', 'vm-uuid': '65e7119e-238b-426c-9e9d-67b4c38c61b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:24 np0005534516 NetworkManager[48915]: <info>  [1764060264.8755] manager: (tap82c25621-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.881 253542 INFO os_vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80')#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.950 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.951 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.952 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No VIF found with MAC fa:16:3e:c8:b4:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.953 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Using config drive#033[00m
Nov 25 03:44:24 np0005534516 nova_compute[253538]: 2025-11-25 08:44:24.984 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.791 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating config drive at /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.800 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8mjtha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.891 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updated VIF entry in instance network info cache for port 82c25621-80b3-4927-957d-aec0a653a4f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.892 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.909 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.943 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8mjtha" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.967 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:44:25 np0005534516 nova_compute[253538]: 2025-11-25 08:44:25.970 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.284 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.285 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deleting local config drive /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config because it was imported into RBD.#033[00m
Nov 25 03:44:26 np0005534516 kernel: tap82c25621-80: entered promiscuous mode
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.3665] manager: (tap82c25621-80): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Nov 25 03:44:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:26Z|00973|binding|INFO|Claiming lport 82c25621-80b3-4927-957d-aec0a653a4f8 for this chassis.
Nov 25 03:44:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:26Z|00974|binding|INFO|82c25621-80b3-4927-957d-aec0a653a4f8: Claiming fa:16:3e:c8:b4:91 10.100.0.12
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.397 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b4:91 10.100.0.12'], port_security=['fa:16:3e:c8:b4:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '65e7119e-238b-426c-9e9d-67b4c38c61b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5091e160-cd87-462f-b734-443b7a3a08ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15c498d9-36a3-47cc-8194-8875167d8fa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86fe2083-70bb-4b93-9773-f8b27481b1ca, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=82c25621-80b3-4927-957d-aec0a653a4f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.399 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 82c25621-80b3-4927-957d-aec0a653a4f8 in datapath 5091e160-cd87-462f-b734-443b7a3a08ee bound to our chassis#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.401 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5091e160-cd87-462f-b734-443b7a3a08ee#033[00m
Nov 25 03:44:26 np0005534516 systemd-machined[215790]: New machine qemu-124-instance-00000065.
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ce7778-bb24-4ccd-a3d2-ba05574950f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.418 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5091e160-c1 in ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.421 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5091e160-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8afcfb20-bacf-461d-9c79-0e660682be84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.422 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c089d6b-1c47-4564-ae9b-ba1d74401a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.438 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[98641f84-0c4f-4070-9270-f3d6fde6f955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 systemd[1]: Started Virtual Machine qemu-124-instance-00000065.
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.465 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a357c37-38ec-42bb-8104-074bfe7ea2a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:26Z|00975|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 ovn-installed in OVS
Nov 25 03:44:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:26Z|00976|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 up in Southbound
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 podman[352561]: 2025-11-25 08:44:26.485751643 +0000 UTC m=+0.080122244 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 03:44:26 np0005534516 systemd-udevd[352589]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.5011] device (tap82c25621-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.5020] device (tap82c25621-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.509 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6863c0-1840-4d25-9040-61e34bdc164d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.5155] manager: (tap5091e160-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a7509ee7-bc30-495a-8540-c6b8f35cdb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 systemd-udevd[352594]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b9839b2a-ac38-4427-8ae4-ebb7f3aa1ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.554 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40d73f4e-4361-478e-968e-79588a6e0729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.5736] device (tap5091e160-c0): carrier: link connected
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.580 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6d909d-02e8-41f9-87e0-658a15fd13f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[669fa496-7b3d-44d9-ab40-60d8019af672]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5091e160-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e2:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562117, 'reachable_time': 30122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352618, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf26dc62-3f8c-4584-86cb-9d60487bdc7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:e281'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562117, 'tstamp': 562117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352619, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2846e32a-857e-4fae-ab2d-611cf912887a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5091e160-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e2:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562117, 'reachable_time': 30122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352620, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0edc9e-aa77-4781-aab9-9a17e4598a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.744 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d32900ec-a4cd-4074-9833-d6370c512d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.745 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5091e160-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.746 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.746 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5091e160-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:26 np0005534516 kernel: tap5091e160-c0: entered promiscuous mode
Nov 25 03:44:26 np0005534516 NetworkManager[48915]: <info>  [1764060266.7493] manager: (tap5091e160-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.753 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5091e160-c0, col_values=(('external_ids', {'iface-id': '09d23056-ba0e-4cb0-ae84-5e88f5553ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:26Z|00977|binding|INFO|Releasing lport 09d23056-ba0e-4cb0-ae84-5e88f5553ae3 from this chassis (sb_readonly=0)
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.779 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a82a159-c204-4ee6-8457-9fb912e75627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.781 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-5091e160-cd87-462f-b734-443b7a3a08ee
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 5091e160-cd87-462f-b734-443b7a3a08ee
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:44:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.782 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'env', 'PROCESS_TAG=haproxy-5091e160-cd87-462f-b734-443b7a3a08ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5091e160-cd87-462f-b734-443b7a3a08ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.796 253542 DEBUG nova.compute.manager [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:26 np0005534516 nova_compute[253538]: 2025-11-25 08:44:26.798 253542 DEBUG nova.compute.manager [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Processing event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:27 np0005534516 podman[352653]: 2025-11-25 08:44:27.180159057 +0000 UTC m=+0.047106635 container create ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:44:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:27 np0005534516 systemd[1]: Started libpod-conmon-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope.
Nov 25 03:44:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:44:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d54a18379cf3aa265a6136d343a21576bceff9585d581766ed779c3b455434/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:44:27 np0005534516 podman[352653]: 2025-11-25 08:44:27.154006815 +0000 UTC m=+0.020954423 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:44:27 np0005534516 podman[352653]: 2025-11-25 08:44:27.264070074 +0000 UTC m=+0.131017632 container init ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:44:27 np0005534516 podman[352653]: 2025-11-25 08:44:27.271531957 +0000 UTC m=+0.138479525 container start ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 03:44:27 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : New worker (352713) forked
Nov 25 03:44:27 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : Loading success.
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.400 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.3997738, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.400 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Started (Lifecycle Event)#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.404 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.413 253542 INFO nova.virt.libvirt.driver [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance spawned successfully.#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.413 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.430 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.436 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.441 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.442 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.442 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.443 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.443 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.444 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.484 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.484 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.4042397, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.485 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.511 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.516 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.407532, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.516 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:44:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.536 253542 INFO nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.537 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.549 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.595 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.595 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.596 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.629 253542 INFO nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 9.18 seconds to build instance.#033[00m
Nov 25 03:44:27 np0005534516 nova_compute[253538]: 2025-11-25 08:44:27.656 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.358 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:44:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:44:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.360 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.406 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.928 253542 DEBUG nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.930 253542 DEBUG nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:44:28 np0005534516 nova_compute[253538]: 2025-11-25 08:44:28.930 253542 WARNING nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received unexpected event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:44:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:44:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:44:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:44:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:44:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.583 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.585 253542 INFO nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Terminating instance#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.586 253542 DEBUG nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:44:29 np0005534516 kernel: tap82c25621-80 (unregistering): left promiscuous mode
Nov 25 03:44:29 np0005534516 NetworkManager[48915]: <info>  [1764060269.7825] device (tap82c25621-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:44:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:29Z|00978|binding|INFO|Releasing lport 82c25621-80b3-4927-957d-aec0a653a4f8 from this chassis (sb_readonly=0)
Nov 25 03:44:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:29Z|00979|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 down in Southbound
Nov 25 03:44:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:44:29Z|00980|binding|INFO|Removing iface tap82c25621-80 ovn-installed in OVS
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.801 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b4:91 10.100.0.12'], port_security=['fa:16:3e:c8:b4:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '65e7119e-238b-426c-9e9d-67b4c38c61b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5091e160-cd87-462f-b734-443b7a3a08ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15c498d9-36a3-47cc-8194-8875167d8fa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86fe2083-70bb-4b93-9773-f8b27481b1ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=82c25621-80b3-4927-957d-aec0a653a4f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:44:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.802 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 82c25621-80b3-4927-957d-aec0a653a4f8 in datapath 5091e160-cd87-462f-b734-443b7a3a08ee unbound from our chassis#033[00m
Nov 25 03:44:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.803 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5091e160-cd87-462f-b734-443b7a3a08ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:44:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.804 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cba9f19e-9ef6-47e0-8270-f0d464804aab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.804 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee namespace which is not needed anymore#033[00m
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:29 np0005534516 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 03:44:29 np0005534516 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Consumed 3.163s CPU time.
Nov 25 03:44:29 np0005534516 systemd-machined[215790]: Machine qemu-124-instance-00000065 terminated.
Nov 25 03:44:29 np0005534516 nova_compute[253538]: 2025-11-25 08:44:29.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.030 253542 INFO nova.virt.libvirt.driver [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance destroyed successfully.#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.031 253542 DEBUG nova.objects.instance [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'resources' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : haproxy version is 2.8.14-c23fe91
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : path to executable is /usr/sbin/haproxy
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : Exiting Master process...
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : Exiting Master process...
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [ALERT]    (352710) : Current worker (352713) exited with code 143 (Terminated)
Nov 25 03:44:30 np0005534516 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : All workers exited. Exiting... (0)
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.047 253542 DEBUG nova.virt.libvirt.vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:44:27Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.048 253542 DEBUG nova.network.os_vif_util [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.049 253542 DEBUG nova.network.os_vif_util [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:44:30 np0005534516 systemd[1]: libpod-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope: Deactivated successfully.
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.050 253542 DEBUG os_vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.051 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.052 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c25621-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 podman[352752]: 2025-11-25 08:44:30.057665364 +0000 UTC m=+0.167131786 container died ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.058 253542 INFO os_vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80')#033[00m
Nov 25 03:44:30 np0005534516 podman[352778]: 2025-11-25 08:44:30.423300258 +0000 UTC m=+0.363354073 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:44:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb-userdata-shm.mount: Deactivated successfully.
Nov 25 03:44:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a3d54a18379cf3aa265a6136d343a21576bceff9585d581766ed779c3b455434-merged.mount: Deactivated successfully.
Nov 25 03:44:30 np0005534516 podman[352752]: 2025-11-25 08:44:30.474423101 +0000 UTC m=+0.583889533 container cleanup ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:30 np0005534516 podman[352838]: 2025-11-25 08:44:30.562680677 +0000 UTC m=+0.061822536 container remove ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:44:30 np0005534516 systemd[1]: libpod-conmon-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope: Deactivated successfully.
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.568 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a43185c2-07d7-4e67-9f6b-32c7975f4e9d]: (4, ('Tue Nov 25 08:44:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee (ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb)\nba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb\nTue Nov 25 08:44:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee (ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb)\nba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387cff21-f137-4711-a11b-35e60515b654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5091e160-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 kernel: tap5091e160-c0: left promiscuous mode
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.590 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3117b71-a824-4e17-9c4b-8d34f6bfb847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e68f837b-a124-4380-9b58-a61ccd2d13fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.603 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7beaf457-b9eb-4d95-bc06-9b47d5c600ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fba7735a-d7a3-4da8-846f-41a0331e71f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562110, 'reachable_time': 42824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352853, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.619 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:44:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.619 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f991180d-5902-4206-a9a4-baf85292e422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:44:30 np0005534516 systemd[1]: run-netns-ovnmeta\x2d5091e160\x2dcd87\x2d462f\x2db734\x2d443b7a3a08ee.mount: Deactivated successfully.
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.980 253542 INFO nova.virt.libvirt.driver [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deleting instance files /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7_del#033[00m
Nov 25 03:44:30 np0005534516 nova_compute[253538]: 2025-11-25 08:44:30.981 253542 INFO nova.virt.libvirt.driver [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deletion of /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7_del complete#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.012 253542 WARNING nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received unexpected event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.024 253542 INFO nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.024 253542 DEBUG oslo.service.loopingcall [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.025 253542 DEBUG nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:44:31 np0005534516 nova_compute[253538]: 2025-11-25 08:44:31.025 253542 DEBUG nova.network.neutron [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:44:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.111 253542 DEBUG nova.network.neutron [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.132 253542 INFO nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 1.11 seconds to deallocate network for instance.#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.178 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.179 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.225 253542 DEBUG oslo_concurrency.processutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:44:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679818289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.654 253542 DEBUG oslo_concurrency.processutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.660 253542 DEBUG nova.compute.provider_tree [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.675 253542 DEBUG nova.scheduler.client.report [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.693 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.695 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.695 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.696 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.696 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.745 253542 INFO nova.scheduler.client.report [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Deleted allocations for instance 65e7119e-238b-426c-9e9d-67b4c38c61b7#033[00m
Nov 25 03:44:32 np0005534516 nova_compute[253538]: 2025-11-25 08:44:32.824 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:44:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045952111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.108 253542 DEBUG nova.compute.manager [req-1692b5a5-88f7-4ff5-b9d9-d25ad729eba8 req-942da066-845e-4fc8-8b48-1f0cfc462b83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-deleted-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.110 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.301 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.302 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3939MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.303 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.303 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.350 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.352 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.385 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:44:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 114 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 03:44:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:44:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976193311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.944 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.951 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.966 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.988 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:44:33 np0005534516 nova_compute[253538]: 2025-11-25 08:44:33.988 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:34 np0005534516 nova_compute[253538]: 2025-11-25 08:44:34.988 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:35 np0005534516 nova_compute[253538]: 2025-11-25 08:44:35.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 936 KiB/s wr, 101 op/s
Nov 25 03:44:35 np0005534516 nova_compute[253538]: 2025-11-25 08:44:35.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:35 np0005534516 nova_compute[253538]: 2025-11-25 08:44:35.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:37 np0005534516 nova_compute[253538]: 2025-11-25 08:44:37.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 03:44:38 np0005534516 nova_compute[253538]: 2025-11-25 08:44:38.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:38 np0005534516 nova_compute[253538]: 2025-11-25 08:44:38.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:44:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 03:44:40 np0005534516 nova_compute[253538]: 2025-11-25 08:44:40.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.069 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:44:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 03:44:42 np0005534516 nova_compute[253538]: 2025-11-25 08:44:42.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.2 KiB/s wr, 59 op/s
Nov 25 03:44:45 np0005534516 nova_compute[253538]: 2025-11-25 08:44:45.029 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060270.0267487, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:44:45 np0005534516 nova_compute[253538]: 2025-11-25 08:44:45.030 253542 INFO nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:44:45 np0005534516 nova_compute[253538]: 2025-11-25 08:44:45.055 253542 DEBUG nova.compute.manager [None req-9b05518c-4af7-49dd-91d0-672d42ce37ba - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:44:45 np0005534516 nova_compute[253538]: 2025-11-25 08:44:45.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 03:44:47 np0005534516 nova_compute[253538]: 2025-11-25 08:44:47.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:50 np0005534516 nova_compute[253538]: 2025-11-25 08:44:50.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:51 np0005534516 podman[352922]: 2025-11-25 08:44:51.826657305 +0000 UTC m=+0.078016817 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:44:52 np0005534516 nova_compute[253538]: 2025-11-25 08:44:52.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:44:53
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', 'backups', 'default.rgw.meta']
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:44:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:44:55 np0005534516 nova_compute[253538]: 2025-11-25 08:44:55.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:56 np0005534516 podman[352942]: 2025-11-25 08:44:56.822234122 +0000 UTC m=+0.072776754 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:44:57 np0005534516 nova_compute[253538]: 2025-11-25 08:44:57.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:44:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:44:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:44:59 np0005534516 podman[353135]: 2025-11-25 08:44:59.000086232 +0000 UTC m=+0.342330470 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:44:59 np0005534516 podman[353135]: 2025-11-25 08:44:59.143737077 +0000 UTC m=+0.485981255 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:44:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:00 np0005534516 nova_compute[253538]: 2025-11-25 08:45:00.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:45:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:45:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:00 np0005534516 podman[353320]: 2025-11-25 08:45:00.600833535 +0000 UTC m=+0.106960306 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cb7b7dc4-3bad-4ac9-9c1c-c9ff2261dfb2 does not exist
Nov 25 03:45:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1ea8571f-f945-4836-9c83-b0d91a87e9e6 does not exist
Nov 25 03:45:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2f7cc1f0-954d-4bf5-8ab1-293ee578109b does not exist
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:45:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:01 np0005534516 podman[353590]: 2025-11-25 08:45:01.978367565 +0000 UTC m=+0.071173320 container create cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:45:02 np0005534516 systemd[1]: Started libpod-conmon-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope.
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:01.946453305 +0000 UTC m=+0.039259070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:02.090798308 +0000 UTC m=+0.183604073 container init cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:02.09817866 +0000 UTC m=+0.190984375 container start cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:45:02 np0005534516 systemd[1]: libpod-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope: Deactivated successfully.
Nov 25 03:45:02 np0005534516 priceless_banzai[353606]: 167 167
Nov 25 03:45:02 np0005534516 conmon[353606]: conmon cc831d43ed639e14140f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope/container/memory.events
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:02.107881094 +0000 UTC m=+0.200686809 container attach cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:02.108375497 +0000 UTC m=+0.201181242 container died cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:45:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6ec8aec0c7c42f370d76ed540009118e8eae8335caafb165bef8ec257db7c2a6-merged.mount: Deactivated successfully.
Nov 25 03:45:02 np0005534516 nova_compute[253538]: 2025-11-25 08:45:02.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:02 np0005534516 podman[353590]: 2025-11-25 08:45:02.172966287 +0000 UTC m=+0.265772002 container remove cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:45:02 np0005534516 systemd[1]: libpod-conmon-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope: Deactivated successfully.
Nov 25 03:45:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:02 np0005534516 podman[353630]: 2025-11-25 08:45:02.346577619 +0000 UTC m=+0.046818357 container create 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:45:02 np0005534516 systemd[1]: Started libpod-conmon-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope.
Nov 25 03:45:02 np0005534516 podman[353630]: 2025-11-25 08:45:02.324701103 +0000 UTC m=+0.024941861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:02 np0005534516 podman[353630]: 2025-11-25 08:45:02.466355203 +0000 UTC m=+0.166595971 container init 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:45:02 np0005534516 podman[353630]: 2025-11-25 08:45:02.478557086 +0000 UTC m=+0.178797824 container start 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:45:02 np0005534516 podman[353630]: 2025-11-25 08:45:02.487624953 +0000 UTC m=+0.187865741 container attach 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:45:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:03 np0005534516 fervent_easley[353647]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:45:03 np0005534516 fervent_easley[353647]: --> relative data size: 1.0
Nov 25 03:45:03 np0005534516 fervent_easley[353647]: --> All data devices are unavailable
Nov 25 03:45:03 np0005534516 systemd[1]: libpod-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Deactivated successfully.
Nov 25 03:45:03 np0005534516 systemd[1]: libpod-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Consumed 1.090s CPU time.
Nov 25 03:45:03 np0005534516 podman[353630]: 2025-11-25 08:45:03.625293836 +0000 UTC m=+1.325534624 container died 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:45:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b-merged.mount: Deactivated successfully.
Nov 25 03:45:03 np0005534516 podman[353630]: 2025-11-25 08:45:03.967301756 +0000 UTC m=+1.667542494 container remove 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:45:03 np0005534516 systemd[1]: libpod-conmon-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Deactivated successfully.
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.723998317 +0000 UTC m=+0.042149679 container create 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:45:04 np0005534516 systemd[1]: Started libpod-conmon-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope.
Nov 25 03:45:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.705152143 +0000 UTC m=+0.023303535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.810752932 +0000 UTC m=+0.128904324 container init 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.817257109 +0000 UTC m=+0.135408481 container start 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.820875308 +0000 UTC m=+0.139026710 container attach 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:45:04 np0005534516 nice_beaver[353847]: 167 167
Nov 25 03:45:04 np0005534516 systemd[1]: libpod-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope: Deactivated successfully.
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.822904192 +0000 UTC m=+0.141055574 container died 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:45:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1ae638707330f2c7c18558eaefb2042e7c04063007e7ab32ce4355c22d7775da-merged.mount: Deactivated successfully.
Nov 25 03:45:04 np0005534516 podman[353831]: 2025-11-25 08:45:04.863290064 +0000 UTC m=+0.181441436 container remove 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:45:04 np0005534516 systemd[1]: libpod-conmon-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope: Deactivated successfully.
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.027636292 +0000 UTC m=+0.045346007 container create acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:45:05 np0005534516 systemd[1]: Started libpod-conmon-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope.
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.004057109 +0000 UTC m=+0.021766764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:05 np0005534516 nova_compute[253538]: 2025-11-25 08:45:05.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.141266429 +0000 UTC m=+0.158976424 container init acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.150392417 +0000 UTC m=+0.168102092 container start acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.155946378 +0000 UTC m=+0.173656043 container attach acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:45:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]: {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    "0": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "devices": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "/dev/loop3"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            ],
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_name": "ceph_lv0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_size": "21470642176",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "name": "ceph_lv0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "tags": {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_name": "ceph",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.crush_device_class": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.encrypted": "0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_id": "0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.vdo": "0"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            },
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "vg_name": "ceph_vg0"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        }
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    ],
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    "1": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "devices": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "/dev/loop4"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            ],
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_name": "ceph_lv1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_size": "21470642176",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "name": "ceph_lv1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "tags": {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_name": "ceph",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.crush_device_class": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.encrypted": "0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_id": "1",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.vdo": "0"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            },
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "vg_name": "ceph_vg1"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        }
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    ],
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    "2": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "devices": [
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "/dev/loop5"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            ],
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_name": "ceph_lv2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_size": "21470642176",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "name": "ceph_lv2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "tags": {
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.cluster_name": "ceph",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.crush_device_class": "",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.encrypted": "0",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osd_id": "2",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:                "ceph.vdo": "0"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            },
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "type": "block",
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:            "vg_name": "ceph_vg2"
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:        }
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]:    ]
Nov 25 03:45:05 np0005534516 angry_sutherland[353888]: }
Nov 25 03:45:05 np0005534516 systemd[1]: libpod-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope: Deactivated successfully.
Nov 25 03:45:05 np0005534516 podman[353871]: 2025-11-25 08:45:05.992863366 +0000 UTC m=+1.010573011 container died acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:45:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005-merged.mount: Deactivated successfully.
Nov 25 03:45:06 np0005534516 podman[353871]: 2025-11-25 08:45:06.062992957 +0000 UTC m=+1.080702592 container remove acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:45:06 np0005534516 systemd[1]: libpod-conmon-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope: Deactivated successfully.
Nov 25 03:45:06 np0005534516 podman[354051]: 2025-11-25 08:45:06.791408267 +0000 UTC m=+0.035401266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:06 np0005534516 podman[354051]: 2025-11-25 08:45:06.995808057 +0000 UTC m=+0.239801026 container create 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 03:45:07 np0005534516 systemd[1]: Started libpod-conmon-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope.
Nov 25 03:45:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:07 np0005534516 podman[354051]: 2025-11-25 08:45:07.134295772 +0000 UTC m=+0.378288821 container init 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:45:07 np0005534516 podman[354051]: 2025-11-25 08:45:07.143083701 +0000 UTC m=+0.387076660 container start 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:45:07 np0005534516 stoic_hertz[354067]: 167 167
Nov 25 03:45:07 np0005534516 systemd[1]: libpod-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope: Deactivated successfully.
Nov 25 03:45:07 np0005534516 conmon[354067]: conmon 3c603dc69eb7593e015d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope/container/memory.events
Nov 25 03:45:07 np0005534516 nova_compute[253538]: 2025-11-25 08:45:07.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:07 np0005534516 podman[354051]: 2025-11-25 08:45:07.17573075 +0000 UTC m=+0.419723759 container attach 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 03:45:07 np0005534516 podman[354051]: 2025-11-25 08:45:07.176354988 +0000 UTC m=+0.420347977 container died 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:45:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3ca01bfd5b09f7d66871e288471a81483624665255c2bb9a3ca56df670a17bba-merged.mount: Deactivated successfully.
Nov 25 03:45:07 np0005534516 podman[354051]: 2025-11-25 08:45:07.263301368 +0000 UTC m=+0.507294337 container remove 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:45:07 np0005534516 systemd[1]: libpod-conmon-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope: Deactivated successfully.
Nov 25 03:45:07 np0005534516 podman[354092]: 2025-11-25 08:45:07.498391203 +0000 UTC m=+0.046968380 container create 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:45:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:07 np0005534516 systemd[1]: Started libpod-conmon-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope.
Nov 25 03:45:07 np0005534516 podman[354092]: 2025-11-25 08:45:07.478603474 +0000 UTC m=+0.027180651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:45:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:07 np0005534516 podman[354092]: 2025-11-25 08:45:07.605088872 +0000 UTC m=+0.153666099 container init 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:45:07 np0005534516 podman[354092]: 2025-11-25 08:45:07.616167674 +0000 UTC m=+0.164744821 container start 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:45:07 np0005534516 podman[354092]: 2025-11-25 08:45:07.622420024 +0000 UTC m=+0.170997201 container attach 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]: {
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_id": 1,
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "type": "bluestore"
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    },
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_id": 2,
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "type": "bluestore"
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    },
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_id": 0,
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:        "type": "bluestore"
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]:    }
Nov 25 03:45:08 np0005534516 heuristic_colden[354109]: }
Nov 25 03:45:08 np0005534516 systemd[1]: libpod-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Deactivated successfully.
Nov 25 03:45:08 np0005534516 podman[354092]: 2025-11-25 08:45:08.676026346 +0000 UTC m=+1.224603523 container died 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:45:08 np0005534516 systemd[1]: libpod-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Consumed 1.064s CPU time.
Nov 25 03:45:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382-merged.mount: Deactivated successfully.
Nov 25 03:45:08 np0005534516 podman[354092]: 2025-11-25 08:45:08.735446756 +0000 UTC m=+1.284023883 container remove 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:45:08 np0005534516 systemd[1]: libpod-conmon-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Deactivated successfully.
Nov 25 03:45:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:45:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:45:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:08 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f13e6c5f-67c6-4f8d-98cc-deb4a1e43135 does not exist
Nov 25 03:45:08 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7ce54203-8e03-4c5a-908b-e9e5541e7af3 does not exist
Nov 25 03:45:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:08Z|00981|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 25 03:45:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:45:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:10 np0005534516 nova_compute[253538]: 2025-11-25 08:45:10.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:12 np0005534516 nova_compute[253538]: 2025-11-25 08:45:12.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:15 np0005534516 nova_compute[253538]: 2025-11-25 08:45:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:17 np0005534516 nova_compute[253538]: 2025-11-25 08:45:17.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.556 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.557 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.591 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.682 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.683 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.696 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.697 253542 INFO nova.compute.claims [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:45:19 np0005534516 nova_compute[253538]: 2025-11-25 08:45:19.783 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:45:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407019856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.270 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.280 253542 DEBUG nova.compute.provider_tree [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.303 253542 DEBUG nova.scheduler.client.report [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.339 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.403 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.404 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.414 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.415 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.464 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.464 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.486 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.516 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.595 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.596 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.597 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating image(s)#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.621 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.646 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.678 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.684 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.777 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.780 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.803 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.807 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:20 np0005534516 nova_compute[253538]: 2025-11-25 08:45:20.902 253542 DEBUG nova.policy [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0f86af4601044b11b6ad679db30c1c0a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f476c88f03140fb8498fc62d3d783b0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.159 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.226 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] resizing rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.323 253542 DEBUG nova.objects.instance [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.334 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.335 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Ensure instance console log exists: /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.335 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.336 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.336 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:21 np0005534516 nova_compute[253538]: 2025-11-25 08:45:21.661 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Successfully created port: 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.125096) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322125171, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1887, "num_deletes": 262, "total_data_size": 2869941, "memory_usage": 2907920, "flush_reason": "Manual Compaction"}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322142410, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2804690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38241, "largest_seqno": 40127, "table_properties": {"data_size": 2795971, "index_size": 5405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18382, "raw_average_key_size": 20, "raw_value_size": 2778369, "raw_average_value_size": 3135, "num_data_blocks": 238, "num_entries": 886, "num_filter_entries": 886, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060157, "oldest_key_time": 1764060157, "file_creation_time": 1764060322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17361 microseconds, and 7098 cpu microseconds.
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.142463) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2804690 bytes OK
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.142487) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144447) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144459) EVENT_LOG_v1 {"time_micros": 1764060322144454, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2861786, prev total WAL file size 2861786, number of live WAL files 2.
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.145462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2738KB)], [86(8423KB)]
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322145513, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11430662, "oldest_snapshot_seqno": -1}
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6474 keys, 9802179 bytes, temperature: kUnknown
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322202281, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9802179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9757783, "index_size": 27134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164951, "raw_average_key_size": 25, "raw_value_size": 9640545, "raw_average_value_size": 1489, "num_data_blocks": 1096, "num_entries": 6474, "num_filter_entries": 6474, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.202556) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9802179 bytes
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.203896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.9 rd, 172.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7007, records dropped: 533 output_compression: NoCompression
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.203910) EVENT_LOG_v1 {"time_micros": 1764060322203903, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56887, "compaction_time_cpu_micros": 25683, "output_level": 6, "num_output_files": 1, "total_output_size": 9802179, "num_input_records": 7007, "num_output_records": 6474, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322204405, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322206000, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.145398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:45:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.606 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Successfully updated port: 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquired lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.728 253542 DEBUG nova.compute.manager [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-changed-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.729 253542 DEBUG nova.compute.manager [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Refreshing instance network info cache due to event network-changed-284bf2a3-cc8c-4631-81d0-b2eda45727f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:45:22 np0005534516 nova_compute[253538]: 2025-11-25 08:45:22.729 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:45:22 np0005534516 podman[354393]: 2025-11-25 08:45:22.815613906 +0000 UTC m=+0.060429708 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:45:23 np0005534516 nova_compute[253538]: 2025-11-25 08:45:23.046 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 95 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s wr, 0 op/s
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.481 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.499 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Releasing lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.499 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance network_info: |[{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.500 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.500 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Refreshing network info cache for port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.502 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start _get_guest_xml network_info=[{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.507 253542 WARNING nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.516 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.516 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.524 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:45:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619621212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.973 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:24 np0005534516 nova_compute[253538]: 2025-11-25 08:45:24.996 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.000 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:45:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109364491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.543 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.545 253542 DEBUG nova.virt.libvirt.vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:45:20Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.546 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.547 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.548 253542 DEBUG nova.objects.instance [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.563 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <uuid>5bcfbf47-fa2f-4580-9ace-f946f5683844</uuid>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <name>instance-00000066</name>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerGroupTestJSON-server-1939536355</nova:name>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:45:24</nova:creationTime>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:user uuid="0f86af4601044b11b6ad679db30c1c0a">tempest-ServerGroupTestJSON-166485107-project-member</nova:user>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:project uuid="5f476c88f03140fb8498fc62d3d783b0">tempest-ServerGroupTestJSON-166485107</nova:project>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <nova:port uuid="284bf2a3-cc8c-4631-81d0-b2eda45727f9">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="serial">5bcfbf47-fa2f-4580-9ace-f946f5683844</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="uuid">5bcfbf47-fa2f-4580-9ace-f946f5683844</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5bcfbf47-fa2f-4580-9ace-f946f5683844_disk">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:22:ee:87"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <target dev="tap284bf2a3-cc"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/console.log" append="off"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:45:25 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:45:25 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:45:25 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:45:25 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.564 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Preparing to wait for external event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.564 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.565 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.565 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.566 253542 DEBUG nova.virt.libvirt.vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:45:20Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.566 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.567 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.567 253542 DEBUG os_vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.568 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.569 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.573 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap284bf2a3-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.574 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap284bf2a3-cc, col_values=(('external_ids', {'iface-id': '284bf2a3-cc8c-4631-81d0-b2eda45727f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:ee:87', 'vm-uuid': '5bcfbf47-fa2f-4580-9ace-f946f5683844'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:25 np0005534516 NetworkManager[48915]: <info>  [1764060325.6125] manager: (tap284bf2a3-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.619 253542 INFO os_vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc')#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.671 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No VIF found with MAC fa:16:3e:22:ee:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Using config drive#033[00m
Nov 25 03:45:25 np0005534516 nova_compute[253538]: 2025-11-25 08:45:25.696 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.265 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating config drive at /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.276 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwss7npke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.430 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwss7npke" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.456 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.459 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.622 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.623 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deleting local config drive /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config because it was imported into RBD.#033[00m
Nov 25 03:45:26 np0005534516 kernel: tap284bf2a3-cc: entered promiscuous mode
Nov 25 03:45:26 np0005534516 NetworkManager[48915]: <info>  [1764060326.6917] manager: (tap284bf2a3-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Nov 25 03:45:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:26Z|00982|binding|INFO|Claiming lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 for this chassis.
Nov 25 03:45:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:26Z|00983|binding|INFO|284bf2a3-cc8c-4631-81d0-b2eda45727f9: Claiming fa:16:3e:22:ee:87 10.100.0.13
Nov 25 03:45:26 np0005534516 systemd-udevd[354545]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.743 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updated VIF entry in instance network info cache for port 284bf2a3-cc8c-4631-81d0-b2eda45727f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.744 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:26 np0005534516 NetworkManager[48915]: <info>  [1764060326.7578] device (tap284bf2a3-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:45:26 np0005534516 NetworkManager[48915]: <info>  [1764060326.7585] device (tap284bf2a3-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.767 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:45:26 np0005534516 systemd-machined[215790]: New machine qemu-125-instance-00000066.
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.797 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:ee:87 10.100.0.13'], port_security=['fa:16:3e:22:ee:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5bcfbf47-fa2f-4580-9ace-f946f5683844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f476c88f03140fb8498fc62d3d783b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe50aa03-4d4a-43b6-a9bc-79cbda787ef2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5d4171-3312-481d-8e49-f074dc7ca346, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=284bf2a3-cc8c-4631-81d0-b2eda45727f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.799 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 in datapath e802e356-a113-4a9a-a825-0a16ca2eb73c bound to our chassis#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.800 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e802e356-a113-4a9a-a825-0a16ca2eb73c#033[00m
Nov 25 03:45:26 np0005534516 systemd[1]: Started Virtual Machine qemu-125-instance-00000066.
Nov 25 03:45:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:26Z|00984|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 ovn-installed in OVS
Nov 25 03:45:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:26Z|00985|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 up in Southbound
Nov 25 03:45:26 np0005534516 nova_compute[253538]: 2025-11-25 08:45:26.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4d9f2d-6c00-4a53-87b6-f7e9b778ce2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.818 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape802e356-a1 in ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.824 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape802e356-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39e5ba6e-be19-4d37-9da9-8f6e08187de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e22bd54-49ed-4b50-a8fa-e35a4535cb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.845 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0d50647c-0ad7-4b52-b865-da5bd1de18de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9bae36b3-8247-4c90-8e1d-5c146ac57acd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.906 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[971ee084-443f-4917-b94e-db84445d6816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.913 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e9660e-70d4-4c3d-9cef-e0f64261e3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 NetworkManager[48915]: <info>  [1764060326.9154] manager: (tape802e356-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Nov 25 03:45:26 np0005534516 systemd-udevd[354549]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.946 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a113db9-9d31-4c5d-b0ed-de86720bfc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.950 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[913a6ea9-ad13-4754-b8ca-2cdc3f3e8bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 podman[354553]: 2025-11-25 08:45:26.96342408 +0000 UTC m=+0.101227262 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:45:26 np0005534516 NetworkManager[48915]: <info>  [1764060326.9695] device (tape802e356-a0): carrier: link connected
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.973 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[14a124ba-26da-45f6-9cc4-b53b9370fb97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3497022-6cc5-4515-a86f-b612e269d09a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape802e356-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:01:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568156, 'reachable_time': 40673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354601, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.000 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9891c253-ebfa-469b-8ecd-e4529aca95d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:192'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568156, 'tstamp': 568156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354602, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.014 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b442ab71-f3a0-419e-8714-f7b5a9196904]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape802e356-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:01:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568156, 'reachable_time': 40673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354603, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.051 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7ee884-7cd3-465a-a765-7c9f4d5298f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2ab8c-5b7d-482e-be53-a460b52d3514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.111 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape802e356-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.112 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.113 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape802e356-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:27 np0005534516 NetworkManager[48915]: <info>  [1764060327.1151] manager: (tape802e356-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 25 03:45:27 np0005534516 kernel: tape802e356-a0: entered promiscuous mode
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.117 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape802e356-a0, col_values=(('external_ids', {'iface-id': 'd883574e-d66a-473c-a9da-0bd8ff7e0311'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:27Z|00986|binding|INFO|Releasing lport d883574e-d66a-473c-a9da-0bd8ff7e0311 from this chassis (sb_readonly=0)
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.134 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1af98fd-69c1-4f6e-aae1-7fdaf1f04c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.137 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-e802e356-a113-4a9a-a825-0a16ca2eb73c
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID e802e356-a113-4a9a-a825-0a16ca2eb73c
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:45:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.138 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'env', 'PROCESS_TAG=haproxy-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e802e356-a113-4a9a-a825-0a16ca2eb73c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.255 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.25455, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.255 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Started (Lifecycle Event)#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.279 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.284 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.2556005, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.284 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.303 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.306 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.325 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.375 253542 DEBUG nova.compute.manager [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.376 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG nova.compute.manager [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Processing event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.378 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.382 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.382117, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.384 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.388 253542 INFO nova.virt.libvirt.driver [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance spawned successfully.#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.388 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.405 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.413 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.415 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.415 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.445 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.474 253542 INFO nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 6.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.474 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:45:27 np0005534516 podman[354678]: 2025-11-25 08:45:27.511290082 +0000 UTC m=+0.049867585 container create 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.540 253542 INFO nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 7.90 seconds to build instance.#033[00m
Nov 25 03:45:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 03:45:27 np0005534516 systemd[1]: Started libpod-conmon-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope.
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:27 np0005534516 nova_compute[253538]: 2025-11-25 08:45:27.570 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:45:27 np0005534516 podman[354678]: 2025-11-25 08:45:27.482948124 +0000 UTC m=+0.021525657 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:45:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb33698eadac245ca4ccca1fab19fcc25dc1326559f66c577ff040cd5948b3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:45:27 np0005534516 podman[354678]: 2025-11-25 08:45:27.600288626 +0000 UTC m=+0.138866159 container init 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:45:27 np0005534516 podman[354678]: 2025-11-25 08:45:27.60641811 +0000 UTC m=+0.144995613 container start 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:45:27 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : New worker (354699) forked
Nov 25 03:45:27 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : Loading success.
Nov 25 03:45:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:45:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:45:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:45:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.537 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.539 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.540 253542 INFO nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Terminating instance#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.541 253542 DEBUG nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:45:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.647 253542 DEBUG nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.647 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.649 253542 WARNING nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received unexpected event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:45:29 np0005534516 kernel: tap284bf2a3-cc (unregistering): left promiscuous mode
Nov 25 03:45:29 np0005534516 NetworkManager[48915]: <info>  [1764060329.6877] device (tap284bf2a3-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:29Z|00987|binding|INFO|Releasing lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 from this chassis (sb_readonly=0)
Nov 25 03:45:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:29Z|00988|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 down in Southbound
Nov 25 03:45:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:45:29Z|00989|binding|INFO|Removing iface tap284bf2a3-cc ovn-installed in OVS
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.709 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:ee:87 10.100.0.13'], port_security=['fa:16:3e:22:ee:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5bcfbf47-fa2f-4580-9ace-f946f5683844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f476c88f03140fb8498fc62d3d783b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe50aa03-4d4a-43b6-a9bc-79cbda787ef2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5d4171-3312-481d-8e49-f074dc7ca346, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=284bf2a3-cc8c-4631-81d0-b2eda45727f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.711 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 in datapath e802e356-a113-4a9a-a825-0a16ca2eb73c unbound from our chassis#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.713 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e802e356-a113-4a9a-a825-0a16ca2eb73c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37416b1e-8503-43d5-9452-f95bf2168b33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.714 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c namespace which is not needed anymore#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:29 np0005534516 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 03:45:29 np0005534516 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Consumed 2.650s CPU time.
Nov 25 03:45:29 np0005534516 systemd-machined[215790]: Machine qemu-125-instance-00000066 terminated.
Nov 25 03:45:29 np0005534516 NetworkManager[48915]: <info>  [1764060329.7567] manager: (tap284bf2a3-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.771 253542 INFO nova.virt.libvirt.driver [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance destroyed successfully.#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.772 253542 DEBUG nova.objects.instance [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'resources' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.794 253542 DEBUG nova.virt.libvirt.vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:45:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:45:27Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.795 253542 DEBUG nova.network.os_vif_util [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.796 253542 DEBUG nova.network.os_vif_util [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.796 253542 DEBUG os_vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.798 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap284bf2a3-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:45:29 np0005534516 nova_compute[253538]: 2025-11-25 08:45:29.805 253542 INFO os_vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc')#033[00m
Nov 25 03:45:29 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : haproxy version is 2.8.14-c23fe91
Nov 25 03:45:29 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : path to executable is /usr/sbin/haproxy
Nov 25 03:45:29 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [WARNING]  (354697) : Exiting Master process...
Nov 25 03:45:29 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [ALERT]    (354697) : Current worker (354699) exited with code 143 (Terminated)
Nov 25 03:45:29 np0005534516 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [WARNING]  (354697) : All workers exited. Exiting... (0)
Nov 25 03:45:29 np0005534516 systemd[1]: libpod-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope: Deactivated successfully.
Nov 25 03:45:29 np0005534516 podman[354743]: 2025-11-25 08:45:29.848960359 +0000 UTC m=+0.045299034 container died 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:45:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:45:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-beb33698eadac245ca4ccca1fab19fcc25dc1326559f66c577ff040cd5948b3a-merged.mount: Deactivated successfully.
Nov 25 03:45:29 np0005534516 podman[354743]: 2025-11-25 08:45:29.899674118 +0000 UTC m=+0.096012803 container cleanup 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:45:29 np0005534516 systemd[1]: libpod-conmon-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope: Deactivated successfully.
Nov 25 03:45:29 np0005534516 podman[354793]: 2025-11-25 08:45:29.980712818 +0000 UTC m=+0.056462333 container remove 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5595ca5-f5af-41de-9746-b133617ec956]: (4, ('Tue Nov 25 08:45:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c (8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b)\n8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b\nTue Nov 25 08:45:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c (8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b)\n8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[42fdcad8-a46f-4656-b66e-e104e0ba5d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape802e356-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:30 np0005534516 kernel: tape802e356-a0: left promiscuous mode
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad33721-1e90-444d-93db-62e2c57055a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e98f6c85-0287-4d5e-aecb-7c499a634d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf9bb4-c400-40d0-a7bf-49eead470bf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.071 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35d34317-b579-48bb-b87b-954a83758eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568150, 'reachable_time': 18652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354809, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:30 np0005534516 systemd[1]: run-netns-ovnmeta\x2de802e356\x2da113\x2d4a9a\x2da825\x2d0a16ca2eb73c.mount: Deactivated successfully.
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.076 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.077 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[26bc9d73-cd02-4940-9105-f947ad7389a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.150 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:45:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.152 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.210 253542 INFO nova.virt.libvirt.driver [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deleting instance files /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844_del#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.211 253542 INFO nova.virt.libvirt.driver [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deletion of /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844_del complete#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.271 253542 INFO nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.272 253542 DEBUG oslo.service.loopingcall [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.272 253542 DEBUG nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:45:30 np0005534516 nova_compute[253538]: 2025-11-25 08:45:30.273 253542 DEBUG nova.network.neutron [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:45:30 np0005534516 podman[354810]: 2025-11-25 08:45:30.844028269 +0000 UTC m=+0.094149283 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:45:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.738 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.739 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.739 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.740 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.740 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.741 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.741 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.742 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.742 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 WARNING nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received unexpected event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.921 253542 DEBUG nova.network.neutron [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:45:31 np0005534516 nova_compute[253538]: 2025-11-25 08:45:31.972 253542 INFO nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 1.70 seconds to deallocate network for instance.#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.027 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.028 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.030 253542 DEBUG nova.compute.manager [req-3818273b-383b-4c7f-b619-8e03db95eff4 req-2c288f2c-c003-4590-beb2-27a7b4aabca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-deleted-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:45:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.374 253542 DEBUG oslo_concurrency.processutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:45:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885918888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.834 253542 DEBUG oslo_concurrency.processutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.841 253542 DEBUG nova.compute.provider_tree [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.866 253542 DEBUG nova.scheduler.client.report [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:45:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:78:41 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b8af023-ed29-4fce-8188-06112851a9e5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8e8e3470-47b7-47a3-bc0e-2a46677d3377) old=Port_Binding(mac=['fa:16:3e:ba:78:41 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:45:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8e8e3470-47b7-47a3-bc0e-2a46677d3377 in datapath 8d7b16e8-5cab-4430-95a4-049834bc562c updated#033[00m
Nov 25 03:45:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.876 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d7b16e8-5cab-4430-95a4-049834bc562c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:45:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.877 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4acd1086-1c70-4ebc-837e-f14ffc307d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.883 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:32 np0005534516 nova_compute[253538]: 2025-11-25 08:45:32.940 253542 INFO nova.scheduler.client.report [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Deleted allocations for instance 5bcfbf47-fa2f-4580-9ace-f946f5683844#033[00m
Nov 25 03:45:33 np0005534516 nova_compute[253538]: 2025-11-25 08:45:33.014 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 104 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 03:45:33 np0005534516 nova_compute[253538]: 2025-11-25 08:45:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:33 np0005534516 nova_compute[253538]: 2025-11-25 08:45:33.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:45:33 np0005534516 nova_compute[253538]: 2025-11-25 08:45:33.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:45:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:34.154 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:34 np0005534516 nova_compute[253538]: 2025-11-25 08:45:34.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:45:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114571231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.203 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.395 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.396 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3932MB free_disk=59.980873107910156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.396 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.397 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 126 op/s
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.549 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.549 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:45:35 np0005534516 nova_compute[253538]: 2025-11-25 08:45:35.565 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:45:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902485140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.040 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.046 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.062 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.088 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.088 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:36 np0005534516 nova_compute[253538]: 2025-11-25 08:45:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:45:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:37 np0005534516 nova_compute[253538]: 2025-11-25 08:45:37.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 03:45:38 np0005534516 nova_compute[253538]: 2025-11-25 08:45:38.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:45:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Nov 25 03:45:39 np0005534516 nova_compute[253538]: 2025-11-25 08:45:39.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 03:45:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:42 np0005534516 nova_compute[253538]: 2025-11-25 08:45:42.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.077 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.077 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.094 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.187 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.188 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.195 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.196 253542 INFO nova.compute.claims [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.318 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Nov 25 03:45:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:45:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984082783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.752 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.761 253542 DEBUG nova.compute.provider_tree [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.780 253542 DEBUG nova.scheduler.client.report [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.805 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.806 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.847 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.860 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.875 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.952 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.954 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.954 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating image(s)#033[00m
Nov 25 03:45:43 np0005534516 nova_compute[253538]: 2025-11-25 08:45:43.979 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.006 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.028 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.031 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.104 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.105 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.105 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.106 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.124 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.127 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.425 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.485 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] resizing rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.587 253542 DEBUG nova.objects.instance [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.602 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.603 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ensure instance console log exists: /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.603 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.604 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.604 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.606 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.611 253542 WARNING nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.615 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.616 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.624 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.625 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.625 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.632 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.771 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060329.7701275, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.771 253542 INFO nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Stopped (Lifecycle Event)
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.795 253542 DEBUG nova.compute.manager [None req-6426877f-db23-4ea2-8196-165e937de9b8 - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:45:44 np0005534516 nova_compute[253538]: 2025-11-25 08:45:44.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:45:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695249858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.201 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.224 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.227 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:45:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 98 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 715 KiB/s wr, 25 op/s
Nov 25 03:45:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:45:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269541838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.655 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.657 253542 DEBUG nova.objects.instance [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.687 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <uuid>7d74f950-951c-4cea-99f7-71a915c6a21c</uuid>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <name>instance-00000067</name>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerShowV254Test-server-320589125</nova:name>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:45:44</nova:creationTime>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:user uuid="61a0c433a20242eeae2b074ca5cce0fb">tempest-ServerShowV254Test-584114205-project-member</nova:user>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <nova:project uuid="eab64550373b4d6fa996c94c6ad06846">tempest-ServerShowV254Test-584114205</nova:project>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="serial">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="uuid">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log" append="off"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:45:45 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:45:45 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:45:45 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:45:45 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.737 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.737 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.738 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Using config drive
Nov 25 03:45:45 np0005534516 nova_compute[253538]: 2025-11-25 08:45:45.756 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.001 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating config drive at /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.006 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy_euid_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.160 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy_euid_" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.187 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.193 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.386 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:45:46 np0005534516 nova_compute[253538]: 2025-11-25 08:45:46.388 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting local config drive /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config because it was imported into RBD.
Nov 25 03:45:46 np0005534516 systemd-machined[215790]: New machine qemu-126-instance-00000067.
Nov 25 03:45:46 np0005534516 systemd[1]: Started Virtual Machine qemu-126-instance-00000067.
Nov 25 03:45:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.546 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060347.545099, 7d74f950-951c-4cea-99f7-71a915c6a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.546 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Resumed (Lifecycle Event)
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.549 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.549 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.552 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance spawned successfully.
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.552 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:45:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.578 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.586 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.588 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.588 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.593 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.624 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060347.5487173, 7d74f950-951c-4cea-99f7-71a915c6a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.625 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Started (Lifecycle Event)
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.652 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.656 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.676 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.686 253542 INFO nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 3.73 seconds to spawn the instance on the hypervisor.
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.687 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.748 253542 INFO nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 4.60 seconds to build instance.
Nov 25 03:45:47 np0005534516 nova_compute[253538]: 2025-11-25 08:45:47.765 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:45:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 03:45:49 np0005534516 nova_compute[253538]: 2025-11-25 08:45:49.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.243 253542 INFO nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Rebuilding instance
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.482 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.496 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.542 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.551 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.564 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'resources' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.573 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.583 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 03:45:50 np0005534516 nova_compute[253538]: 2025-11-25 08:45:50.588 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 03:45:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 03:45:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:52 np0005534516 nova_compute[253538]: 2025-11-25 08:45:52.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:45:53
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms']
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:45:53 np0005534516 nova_compute[253538]: 2025-11-25 08:45:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 03:45:53 np0005534516 podman[355273]: 2025-11-25 08:45:53.821390277 +0000 UTC m=+0.062461533 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:45:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:45:54 np0005534516 nova_compute[253538]: 2025-11-25 08:45:54.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 03:45:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:45:57 np0005534516 nova_compute[253538]: 2025-11-25 08:45:57.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:45:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 25 03:45:57 np0005534516 podman[355292]: 2025-11-25 08:45:57.842167301 +0000 UTC m=+0.082127691 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:45:58 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 03:45:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 669 KiB/s wr, 86 op/s
Nov 25 03:45:59 np0005534516 nova_compute[253538]: 2025-11-25 08:45:59.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:00 np0005534516 nova_compute[253538]: 2025-11-25 08:46:00.637 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 03:46:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 148 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 908 KiB/s wr, 73 op/s
Nov 25 03:46:01 np0005534516 podman[355314]: 2025-11-25 08:46:01.872426537 +0000 UTC m=+0.121483064 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:46:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:02 np0005534516 nova_compute[253538]: 2025-11-25 08:46:02.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:03 np0005534516 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 03:46:03 np0005534516 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Consumed 13.520s CPU time.
Nov 25 03:46:03 np0005534516 systemd-machined[215790]: Machine qemu-126-instance-00000067 terminated.
Nov 25 03:46:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 165 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 25 03:46:03 np0005534516 nova_compute[253538]: 2025-11-25 08:46:03.653 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance shutdown successfully after 13 seconds.
Nov 25 03:46:03 np0005534516 nova_compute[253538]: 2025-11-25 08:46:03.661 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.
Nov 25 03:46:03 np0005534516 nova_compute[253538]: 2025-11-25 08:46:03.668 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.033 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting instance files /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.034 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deletion of /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del complete
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007553940182001693 of space, bias 1.0, pg target 0.2266182054600508 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.172 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.173 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating image(s)
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.204 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.241 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.278 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.283 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.380 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.382 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.383 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.384 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.418 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.423 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.769 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.838 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] resizing rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.943 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.944 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ensure instance console log exists: /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.947 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.951 253542 WARNING nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.979 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.980 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:46:04 np0005534516 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.009 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947521671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.420 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.438 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.441 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 149 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Nov 25 03:46:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947023449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.871 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.875 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <uuid>7d74f950-951c-4cea-99f7-71a915c6a21c</uuid>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <name>instance-00000067</name>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServerShowV254Test-server-320589125</nova:name>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:46:04</nova:creationTime>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:user uuid="61a0c433a20242eeae2b074ca5cce0fb">tempest-ServerShowV254Test-584114205-project-member</nova:user>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <nova:project uuid="eab64550373b4d6fa996c94c6ad06846">tempest-ServerShowV254Test-584114205</nova:project>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="serial">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="uuid">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log" append="off"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:46:05 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:46:05 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:46:05 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:46:05 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.926 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.926 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.927 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Using config drive#033[00m
Nov 25 03:46:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.936 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=524d73b0-969b-46fc-b4ff-7468eaa76344, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4082b80-0c53-4461-b3c7-56420ca50a2b) old=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:46:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.938 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4082b80-0c53-4461-b3c7-56420ca50a2b in datapath 3b58c00c-f900-493f-9371-00803cd7f82a updated#033[00m
Nov 25 03:46:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.939 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b58c00c-f900-493f-9371-00803cd7f82a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:46:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.940 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[014a3f22-d828-444f-860f-539b97eb043e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.951 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:05 np0005534516 nova_compute[253538]: 2025-11-25 08:46:05.968 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.241 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating config drive at /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.250 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis31mg61 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.412 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis31mg61" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.448 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.453 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.706 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:06 np0005534516 nova_compute[253538]: 2025-11-25 08:46:06.708 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting local config drive /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config because it was imported into RBD.#033[00m
Nov 25 03:46:06 np0005534516 systemd-machined[215790]: New machine qemu-127-instance-00000067.
Nov 25 03:46:06 np0005534516 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Nov 25 03:46:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.291 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7d74f950-951c-4cea-99f7-71a915c6a21c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.292 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060367.2913284, 7d74f950-951c-4cea-99f7-71a915c6a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.292 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.295 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.295 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.300 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance spawned successfully.#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.301 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.334 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.339 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.349 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.351 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.351 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.373 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.374 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060367.2957478, 7d74f950-951c-4cea-99f7-71a915c6a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.374 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Started (Lifecycle Event)#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.404 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.408 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.414 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.423 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.462 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.463 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.463 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:46:07 np0005534516 nova_compute[253538]: 2025-11-25 08:46:07.517 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 127 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.805 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.805 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.806 253542 INFO nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Terminating instance#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquired lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:46:08 np0005534516 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.001 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.319 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.346 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Releasing lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.346 253542 DEBUG nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:46:09 np0005534516 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 03:46:09 np0005534516 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 2.641s CPU time.
Nov 25 03:46:09 np0005534516 systemd-machined[215790]: Machine qemu-127-instance-00000067 terminated.
Nov 25 03:46:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 119 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 808 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.570 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.571 253542 DEBUG nova.objects.instance [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'resources' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7c8f7b1e-67a8-4e31-95e0-fa3ea2f20e55 does not exist
Nov 25 03:46:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d40661d3-566f-48f1-99b0-ff02f1b3a083 does not exist
Nov 25 03:46:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ba1d45b7-b105-411e-a2a7-4fe4ae298d87 does not exist
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.969 253542 INFO nova.virt.libvirt.driver [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting instance files /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del#033[00m
Nov 25 03:46:09 np0005534516 nova_compute[253538]: 2025-11-25 08:46:09.970 253542 INFO nova.virt.libvirt.driver [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deletion of /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del complete#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.016 253542 INFO nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.017 253542 DEBUG oslo.service.loopingcall [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.018 253542 DEBUG nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.018 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:46:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:10Z|00990|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.207 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.222 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.240 253542 INFO nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 0.22 seconds to deallocate network for instance.#033[00m
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.373531761 +0000 UTC m=+0.044033830 container create 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:46:10 np0005534516 systemd[1]: Started libpod-conmon-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope.
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.426 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.426 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.352340363 +0000 UTC m=+0.022842432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.46608623 +0000 UTC m=+0.136588339 container init 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.47544648 +0000 UTC m=+0.145948569 container start 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.480360752 +0000 UTC m=+0.150862831 container attach 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:46:10 np0005534516 blissful_bassi[356017]: 167 167
Nov 25 03:46:10 np0005534516 systemd[1]: libpod-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope: Deactivated successfully.
Nov 25 03:46:10 np0005534516 conmon[356017]: conmon 8e2908b66aaa9f9a78f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope/container/memory.events
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.485 253542 DEBUG oslo_concurrency.processutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.486976139 +0000 UTC m=+0.157478188 container died 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 03:46:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c290bb5bd5fdf500ac93d4d0a2c832cf142614c4ffcd6438edd3a98338878404-merged.mount: Deactivated successfully.
Nov 25 03:46:10 np0005534516 podman[356001]: 2025-11-25 08:46:10.528006278 +0000 UTC m=+0.198508357 container remove 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:46:10 np0005534516 systemd[1]: libpod-conmon-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope: Deactivated successfully.
Nov 25 03:46:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:46:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:46:10 np0005534516 podman[356061]: 2025-11-25 08:46:10.716766463 +0000 UTC m=+0.064002975 container create c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:46:10 np0005534516 systemd[1]: Started libpod-conmon-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope.
Nov 25 03:46:10 np0005534516 podman[356061]: 2025-11-25 08:46:10.68340487 +0000 UTC m=+0.030641422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:10 np0005534516 podman[356061]: 2025-11-25 08:46:10.838916775 +0000 UTC m=+0.186153337 container init c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:46:10 np0005534516 podman[356061]: 2025-11-25 08:46:10.847920056 +0000 UTC m=+0.195156528 container start c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:46:10 np0005534516 podman[356061]: 2025-11-25 08:46:10.854083001 +0000 UTC m=+0.201319523 container attach c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:46:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:46:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/642968321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.943 253542 DEBUG oslo_concurrency.processutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.951 253542 DEBUG nova.compute.provider_tree [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:46:10 np0005534516 nova_compute[253538]: 2025-11-25 08:46:10.975 253542 DEBUG nova.scheduler.client.report [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:46:11 np0005534516 nova_compute[253538]: 2025-11-25 08:46:11.110 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:46:11 np0005534516 nova_compute[253538]: 2025-11-25 08:46:11.172 253542 INFO nova.scheduler.client.report [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Deleted allocations for instance 7d74f950-951c-4cea-99f7-71a915c6a21c
Nov 25 03:46:11 np0005534516 nova_compute[253538]: 2025-11-25 08:46:11.272 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:46:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Nov 25 03:46:11 np0005534516 inspiring_driscoll[356078]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:46:11 np0005534516 inspiring_driscoll[356078]: --> relative data size: 1.0
Nov 25 03:46:11 np0005534516 inspiring_driscoll[356078]: --> All data devices are unavailable
Nov 25 03:46:11 np0005534516 systemd[1]: libpod-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Deactivated successfully.
Nov 25 03:46:11 np0005534516 systemd[1]: libpod-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Consumed 1.060s CPU time.
Nov 25 03:46:12 np0005534516 podman[356109]: 2025-11-25 08:46:12.012971498 +0000 UTC m=+0.027674613 container died c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:46:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24-merged.mount: Deactivated successfully.
Nov 25 03:46:12 np0005534516 podman[356109]: 2025-11-25 08:46:12.067182639 +0000 UTC m=+0.081885734 container remove c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:46:12 np0005534516 systemd[1]: libpod-conmon-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Deactivated successfully.
Nov 25 03:46:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:12 np0005534516 nova_compute[253538]: 2025-11-25 08:46:12.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:12 np0005534516 podman[356265]: 2025-11-25 08:46:12.709478202 +0000 UTC m=+0.035703528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:12 np0005534516 podman[356265]: 2025-11-25 08:46:12.919811285 +0000 UTC m=+0.246036561 container create f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:46:12 np0005534516 systemd[1]: Started libpod-conmon-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope.
Nov 25 03:46:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:13 np0005534516 podman[356265]: 2025-11-25 08:46:13.048615234 +0000 UTC m=+0.374840500 container init f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:46:13 np0005534516 podman[356265]: 2025-11-25 08:46:13.055980091 +0000 UTC m=+0.382205327 container start f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:46:13 np0005534516 unruffled_elion[356281]: 167 167
Nov 25 03:46:13 np0005534516 systemd[1]: libpod-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope: Deactivated successfully.
Nov 25 03:46:13 np0005534516 podman[356265]: 2025-11-25 08:46:13.064433687 +0000 UTC m=+0.390658973 container attach f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:46:13 np0005534516 podman[356265]: 2025-11-25 08:46:13.065976499 +0000 UTC m=+0.392201805 container died f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:46:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5c81e26d87a0a1b0d4f4a49f8f43543840f84f7a90403757ff393003dc0d6347-merged.mount: Deactivated successfully.
Nov 25 03:46:13 np0005534516 podman[356265]: 2025-11-25 08:46:13.15715189 +0000 UTC m=+0.483377166 container remove f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:46:13 np0005534516 systemd[1]: libpod-conmon-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope: Deactivated successfully.
Nov 25 03:46:13 np0005534516 podman[356305]: 2025-11-25 08:46:13.377527503 +0000 UTC m=+0.045611153 container create f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:46:13 np0005534516 systemd[1]: Started libpod-conmon-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope.
Nov 25 03:46:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:13 np0005534516 podman[356305]: 2025-11-25 08:46:13.361330959 +0000 UTC m=+0.029414619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:13 np0005534516 podman[356305]: 2025-11-25 08:46:13.470990895 +0000 UTC m=+0.139074575 container init f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:46:13 np0005534516 podman[356305]: 2025-11-25 08:46:13.485196567 +0000 UTC m=+0.153280217 container start f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:46:13 np0005534516 podman[356305]: 2025-11-25 08:46:13.490009595 +0000 UTC m=+0.158093245 container attach f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:46:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 111 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]: {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    "0": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "devices": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "/dev/loop3"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            ],
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_name": "ceph_lv0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_size": "21470642176",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "name": "ceph_lv0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "tags": {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_name": "ceph",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.crush_device_class": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.encrypted": "0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_id": "0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.vdo": "0"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            },
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "vg_name": "ceph_vg0"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        }
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    ],
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    "1": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "devices": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "/dev/loop4"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            ],
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_name": "ceph_lv1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_size": "21470642176",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "name": "ceph_lv1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "tags": {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_name": "ceph",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.crush_device_class": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.encrypted": "0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_id": "1",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.vdo": "0"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            },
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "vg_name": "ceph_vg1"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        }
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    ],
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    "2": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "devices": [
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "/dev/loop5"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            ],
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_name": "ceph_lv2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_size": "21470642176",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "name": "ceph_lv2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "tags": {
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.cluster_name": "ceph",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.crush_device_class": "",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.encrypted": "0",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osd_id": "2",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:                "ceph.vdo": "0"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            },
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "type": "block",
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:            "vg_name": "ceph_vg2"
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:        }
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]:    ]
Nov 25 03:46:14 np0005534516 laughing_chebyshev[356321]: }
Nov 25 03:46:14 np0005534516 systemd[1]: libpod-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope: Deactivated successfully.
Nov 25 03:46:14 np0005534516 podman[356305]: 2025-11-25 08:46:14.287172885 +0000 UTC m=+0.955256525 container died f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:46:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212-merged.mount: Deactivated successfully.
Nov 25 03:46:14 np0005534516 podman[356305]: 2025-11-25 08:46:14.343597466 +0000 UTC m=+1.011681116 container remove f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:46:14 np0005534516 systemd[1]: libpod-conmon-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope: Deactivated successfully.
Nov 25 03:46:14 np0005534516 nova_compute[253538]: 2025-11-25 08:46:14.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:14 np0005534516 podman[356483]: 2025-11-25 08:46:14.989873264 +0000 UTC m=+0.049696332 container create 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 03:46:15 np0005534516 systemd[1]: Started libpod-conmon-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope.
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:14.968754028 +0000 UTC m=+0.028577106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:15.112389765 +0000 UTC m=+0.172212913 container init 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:15.125155637 +0000 UTC m=+0.184978745 container start 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:15.130067329 +0000 UTC m=+0.189890497 container attach 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 03:46:15 np0005534516 eager_golick[356499]: 167 167
Nov 25 03:46:15 np0005534516 systemd[1]: libpod-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope: Deactivated successfully.
Nov 25 03:46:15 np0005534516 conmon[356499]: conmon 1abdd1f5c70e597abf54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope/container/memory.events
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:15.133022668 +0000 UTC m=+0.192845736 container died 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:46:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cf113db45288d783046d69b3f525f98d67828bb88cbb8743d0e916669ca47d6f-merged.mount: Deactivated successfully.
Nov 25 03:46:15 np0005534516 podman[356483]: 2025-11-25 08:46:15.169531456 +0000 UTC m=+0.229354524 container remove 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:46:15 np0005534516 systemd[1]: libpod-conmon-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope: Deactivated successfully.
Nov 25 03:46:15 np0005534516 podman[356524]: 2025-11-25 08:46:15.345959651 +0000 UTC m=+0.043109376 container create ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:46:15 np0005534516 systemd[1]: Started libpod-conmon-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope.
Nov 25 03:46:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:15 np0005534516 podman[356524]: 2025-11-25 08:46:15.326235732 +0000 UTC m=+0.023385457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:46:15 np0005534516 podman[356524]: 2025-11-25 08:46:15.43292528 +0000 UTC m=+0.130074975 container init ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:46:15 np0005534516 podman[356524]: 2025-11-25 08:46:15.438333925 +0000 UTC m=+0.135483620 container start ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:46:15 np0005534516 podman[356524]: 2025-11-25 08:46:15.443607016 +0000 UTC m=+0.140756741 container attach ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:46:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Nov 25 03:46:16 np0005534516 gracious_pare[356540]: {
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_id": 1,
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "type": "bluestore"
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    },
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_id": 2,
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "type": "bluestore"
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    },
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_id": 0,
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:        "type": "bluestore"
Nov 25 03:46:16 np0005534516 gracious_pare[356540]:    }
Nov 25 03:46:16 np0005534516 gracious_pare[356540]: }
Nov 25 03:46:16 np0005534516 systemd[1]: libpod-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Deactivated successfully.
Nov 25 03:46:16 np0005534516 podman[356524]: 2025-11-25 08:46:16.460083678 +0000 UTC m=+1.157233413 container died ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:46:16 np0005534516 systemd[1]: libpod-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Consumed 1.035s CPU time.
Nov 25 03:46:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951-merged.mount: Deactivated successfully.
Nov 25 03:46:16 np0005534516 podman[356524]: 2025-11-25 08:46:16.519598243 +0000 UTC m=+1.216747948 container remove ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:46:16 np0005534516 systemd[1]: libpod-conmon-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Deactivated successfully.
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev dcee8d98-a42d-45d5-a325-379b51e9fe38 does not exist
Nov 25 03:46:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ddcbc32c-7895-4c25-8377-a0dec9697939 does not exist
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:46:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:17 np0005534516 nova_compute[253538]: 2025-11-25 08:46:17.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Nov 25 03:46:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.301 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=524d73b0-969b-46fc-b4ff-7468eaa76344, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4082b80-0c53-4461-b3c7-56420ca50a2b) old=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:46:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.303 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4082b80-0c53-4461-b3c7-56420ca50a2b in datapath 3b58c00c-f900-493f-9371-00803cd7f82a updated#033[00m
Nov 25 03:46:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.304 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b58c00c-f900-493f-9371-00803cd7f82a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:46:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b6659b-25ec-4caa-9836-5ee0252adbf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Nov 25 03:46:19 np0005534516 nova_compute[253538]: 2025-11-25 08:46:19.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:21 np0005534516 nova_compute[253538]: 2025-11-25 08:46:21.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 86 op/s
Nov 25 03:46:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:22 np0005534516 nova_compute[253538]: 2025-11-25 08:46:22.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 25 03:46:24 np0005534516 nova_compute[253538]: 2025-11-25 08:46:24.569 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060369.5679095, 7d74f950-951c-4cea-99f7-71a915c6a21c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:24 np0005534516 nova_compute[253538]: 2025-11-25 08:46:24.570 253542 INFO nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:46:24 np0005534516 nova_compute[253538]: 2025-11-25 08:46:24.588 253542 DEBUG nova.compute.manager [None req-d6b179ab-c939-4f81-a596-8e3b767e7eb1 - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:24 np0005534516 podman[356637]: 2025-11-25 08:46:24.831988553 +0000 UTC m=+0.076225862 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 03:46:24 np0005534516 nova_compute[253538]: 2025-11-25 08:46:24.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 03:46:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:27 np0005534516 nova_compute[253538]: 2025-11-25 08:46:27.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:27 np0005534516 nova_compute[253538]: 2025-11-25 08:46:27.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:46:28 np0005534516 nova_compute[253538]: 2025-11-25 08:46:28.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:28 np0005534516 nova_compute[253538]: 2025-11-25 08:46:28.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:46:28 np0005534516 podman[356657]: 2025-11-25 08:46:28.820362317 +0000 UTC m=+0.066254266 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:46:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:46:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:46:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:46:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:46:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:46:29 np0005534516 nova_compute[253538]: 2025-11-25 08:46:29.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:31.527 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:46:31 np0005534516 nova_compute[253538]: 2025-11-25 08:46:31.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:31.529 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:46:31 np0005534516 nova_compute[253538]: 2025-11-25 08:46:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:31 np0005534516 nova_compute[253538]: 2025-11-25 08:46:31.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:46:31 np0005534516 nova_compute[253538]: 2025-11-25 08:46:31.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:46:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:46:31 np0005534516 nova_compute[253538]: 2025-11-25 08:46:31.578 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:46:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.581 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.582 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.598 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.678 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.679 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.688 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.689 253542 INFO nova.compute.claims [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:46:32 np0005534516 podman[356677]: 2025-11-25 08:46:32.843940455 +0000 UTC m=+0.092809167 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:46:32 np0005534516 nova_compute[253538]: 2025-11-25 08:46:32.926 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:46:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588211025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.366 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.372 253542 DEBUG nova.compute.provider_tree [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.386 253542 DEBUG nova.scheduler.client.report [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.403 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.404 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.453 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.454 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.471 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.484 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.563 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.564 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.565 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating image(s)#033[00m
Nov 25 03:46:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.586 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.603 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.622 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.624 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.689 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.690 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.691 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.691 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.710 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.715 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:33 np0005534516 nova_compute[253538]: 2025-11-25 08:46:33.795 253542 DEBUG nova.policy [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f02d89c9848d8aaaaab070ce4d179', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '947f731219de435196429037dc94fd56', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:34 np0005534516 nova_compute[253538]: 2025-11-25 08:46:34.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.030 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Successfully created port: 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:46:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:46:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222939804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.143 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.343 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.406 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] resizing rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.483 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.484 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3899MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.484 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.485 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.553 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.554 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.554 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:46:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 88 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.5 KiB/s wr, 10 op/s
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.607 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.834 253542 DEBUG nova.objects.instance [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.843 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ensure instance console log exists: /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:35 np0005534516 nova_compute[253538]: 2025-11-25 08:46:35.845 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:46:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841820826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.061 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.068 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.084 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.105 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.106 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.679 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Successfully updated port: 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.736 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.737 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.737 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.830 253542 DEBUG nova.compute.manager [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.831 253542 DEBUG nova.compute.manager [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:46:36 np0005534516 nova_compute[253538]: 2025-11-25 08:46:36.831 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:46:37 np0005534516 nova_compute[253538]: 2025-11-25 08:46:37.100 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:37 np0005534516 nova_compute[253538]: 2025-11-25 08:46:37.101 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:37 np0005534516 nova_compute[253538]: 2025-11-25 08:46:37.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 101 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 774 KiB/s wr, 21 op/s
Nov 25 03:46:37 np0005534516 nova_compute[253538]: 2025-11-25 08:46:37.783 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:46:38 np0005534516 nova_compute[253538]: 2025-11-25 08:46:38.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:38 np0005534516 nova_compute[253538]: 2025-11-25 08:46:38.604 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:46:38 np0005534516 nova_compute[253538]: 2025-11-25 08:46:38.989 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance network_info: |[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.069 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.072 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start _get_guest_xml network_info=[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.077 253542 WARNING nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.091 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.092 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.097 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.098 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.099 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.099 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.107 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808212849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.575 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 103 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 866 KiB/s wr, 24 op/s
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.598 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.604 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:39 np0005534516 nova_compute[253538]: 2025-11-25 08:46:39.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139228368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.031 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.033 253542 DEBUG nova.virt.libvirt.vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:33Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.034 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.035 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.037 253542 DEBUG nova.objects.instance [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.052 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <uuid>0f5e68e6-8f02-4a3a-ac0c-322d82950d98</uuid>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <name>instance-00000068</name>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersNegativeTestJSON-server-673040864</nova:name>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:46:39</nova:creationTime>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <nova:port uuid="7246ed42-6ec3-42e8-9b9d-12606aeeb43c">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="serial">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="uuid">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:8c:96:cd"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <target dev="tap7246ed42-6e"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log" append="off"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:46:40 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:46:40 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:46:40 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:46:40 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Preparing to wait for external event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.055 253542 DEBUG nova.virt.libvirt.vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:33Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.055 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.056 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.056 253542 DEBUG os_vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.060 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.061 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.061 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:40 np0005534516 NetworkManager[48915]: <info>  [1764060400.0640] manager: (tap7246ed42-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.070 253542 INFO os_vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:8c:96:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.115 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Using config drive#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.139 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:40.531 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.691 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating config drive at /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.695 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtifg9cm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.745 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.747 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.768 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.835 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtifg9cm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.873 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:40 np0005534516 nova_compute[253538]: 2025-11-25 08:46:40.876 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.069 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.071 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting local config drive /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config because it was imported into RBD.#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:41 np0005534516 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.1359] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:41Z|00991|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 03:46:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:41Z|00992|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.158 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.159 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.161 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064#033[00m
Nov 25 03:46:41 np0005534516 systemd-machined[215790]: New machine qemu-128-instance-00000068.
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[013b6b67-0571-4a29-85a1-6828704011aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.178 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.180 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.180 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78bb0179-c4b6-4513-a7df-062391567210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.181 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f08286f9-4af0-44bf-85dc-c6304c5a49f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.197 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcb2576-de8b-4734-9d9b-35baa3871cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Nov 25 03:46:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:41Z|00993|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 03:46:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:41Z|00994|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 systemd-udevd[357076]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.218 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d885432-af7f-47cb-b75e-a6e1b8ff09af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.2419] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.2428] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.256 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[328ba119-9ca9-48bf-949d-716306419899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4deaa3-cd5c-4145-a385-8da32f22ba5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.2631] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.298 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36870340-57fb-4736-b14e-ab1fa49d77a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.301 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47f31f0c-e9d3-4f5e-a7bc-de0c059c0068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.3229] device (tapaa04d86f-70): carrier: link connected
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.328 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fb19fa-dd0b-4ffe-b782-0d0cdbe55226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[114d45d0-4e9d-4e26-b036-09619672ba5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357106, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e188277f-26bf-4f48-84bb-e8925bd0d281]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575592, 'tstamp': 575592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357107, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8315cd77-63da-469b-9553-ab4319e194da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357108, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3306347f-0412-4f94-a74f-c2fb43686976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.473 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e30ec9a3-0c92-4dd0-8753-241227d5fa51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.474 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.474 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.475 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 03:46:41 np0005534516 NetworkManager[48915]: <info>  [1764060401.4771] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.479 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:41Z|00995|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.499 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7331bb-5ac4-4fa8-8f6c-d9ef5c30011f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.501 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:46:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.501 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.569 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.5686646, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.570 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)#033[00m
Nov 25 03:46:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.587 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.568901, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.605 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.608 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.619 253542 DEBUG nova.compute.manager [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.619 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG nova.compute.manager [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Processing event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.621 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.624 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.6240888, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.625 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.627 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance spawned successfully.#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.630 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.648 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.652 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.653 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.653 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.658 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.686 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.713 253542 INFO nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 8.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.714 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.777 253542 INFO nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 9.13 seconds to build instance.#033[00m
Nov 25 03:46:41 np0005534516 nova_compute[253538]: 2025-11-25 08:46:41.791 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:41 np0005534516 podman[357183]: 2025-11-25 08:46:41.924426933 +0000 UTC m=+0.058518737 container create 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:46:41 np0005534516 systemd[1]: Started libpod-conmon-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope.
Nov 25 03:46:41 np0005534516 podman[357183]: 2025-11-25 08:46:41.890718941 +0000 UTC m=+0.024810765 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:46:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:46:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5392b1f619a47826c2c6fed409eebb3fb96b83ac7eec8e199c10f4e6b9e7199c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:46:42 np0005534516 podman[357183]: 2025-11-25 08:46:42.010519549 +0000 UTC m=+0.144611373 container init 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:46:42 np0005534516 podman[357183]: 2025-11-25 08:46:42.021341399 +0000 UTC m=+0.155433203 container start 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:46:42 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : New worker (357206) forked
Nov 25 03:46:42 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : Loading success.
Nov 25 03:46:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:42 np0005534516 nova_compute[253538]: 2025-11-25 08:46:42.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.809 253542 DEBUG nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:46:43 np0005534516 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 WARNING nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.799 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.800 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.818 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.894 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.894 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.906 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:46:45 np0005534516 nova_compute[253538]: 2025-11-25 08:46:45.906 253542 INFO nova.compute.claims [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.044 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:46:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307040062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.469 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.476 253542 DEBUG nova.compute.provider_tree [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.492 253542 DEBUG nova.scheduler.client.report [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.514 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.515 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.559 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.560 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.604 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.639 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.764 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.766 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.767 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating image(s)#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.809 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.834 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.857 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.861 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.900 253542 DEBUG nova.policy [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f02d89c9848d8aaaaab070ce4d179', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '947f731219de435196429037dc94fd56', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.938 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.939 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.939 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.940 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.963 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:46 np0005534516 nova_compute[253538]: 2025-11-25 08:46:46.968 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.281 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.349 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] resizing rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.448 253542 DEBUG nova.objects.instance [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.462 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.462 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Ensure instance console log exists: /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 03:46:47 np0005534516 nova_compute[253538]: 2025-11-25 08:46:47.885 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Successfully created port: 51030d00-0656-4e18-a844-07210dd53c67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.578 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Successfully updated port: 51030d00-0656-4e18-a844-07210dd53c67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:46:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 140 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.600 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.601 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.601 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.679 253542 DEBUG nova.compute.manager [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-changed-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.679 253542 DEBUG nova.compute.manager [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Refreshing instance network info cache due to event network-changed-51030d00-0656-4e18-a844-07210dd53c67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:46:49 np0005534516 nova_compute[253538]: 2025-11-25 08:46:49.680 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.025 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.790 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.808 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.808 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance network_info: |[{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.809 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.810 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Refreshing network info cache for port 51030d00-0656-4e18-a844-07210dd53c67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.815 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start _get_guest_xml network_info=[{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.823 253542 WARNING nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.836 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.837 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.842 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.843 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.844 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.845 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.846 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.847 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.847 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.848 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.849 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.849 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.850 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.851 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.851 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.852 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:46:50 np0005534516 nova_compute[253538]: 2025-11-25 08:46:50.858 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266228349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.350 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.370 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.373 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:46:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 151 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Nov 25 03:46:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:46:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222846228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.823 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.827 253542 DEBUG nova.virt.libvirt.vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNega
tiveTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:46Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.828 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.830 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
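The "Converting VIF" / "Converted object" pair above is `nova_to_osvif_vif` mapping nova's VIF dict onto an os-vif `VIFOpenVSwitch` object. A rough sketch of that mapping for the `ovs` case, using a plain dataclass in place of the real oslo.versionedobjects class (the field selection here mirrors what the log prints, not the full os-vif schema):

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    # Subset of the fields os-vif's VIFOpenVSwitch carries; the real
    # object is an oslo.versionedobjects class, this is only a sketch.
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_to_osvif_vif(vif):
    """Mirror the conversion logged above for vif_type=ovs: the tap
    device name comes from 'devname', the bridge and port_filter flag
    from the binding 'details' dict Neutron supplied."""
    if vif['type'] != 'ovs':
        raise ValueError('only the ovs case is sketched here')
    return VIFOpenVSwitch(
        id=vif['id'],
        address=vif['address'],
        bridge_name=vif['details']['bridge_name'],
        vif_name=vif['devname'],
        has_traffic_filtering=vif['details'].get('port_filter', False),
        active=vif['active'],
    )
```

Note `has_traffic_filtering=True` in the converted object: with OVN as the bound driver, security groups are enforced in OVS flows, so nova skips the old hybrid-bridge firewall plumbing.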
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.833 253542 DEBUG nova.objects.instance [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.863 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <uuid>46491e7b-1f61-45bf-a185-2a6b9dfb7258</uuid>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <name>instance-00000069</name>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersNegativeTestJSON-server-1354440524</nova:name>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:46:50</nova:creationTime>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <nova:port uuid="51030d00-0656-4e18-a844-07210dd53c67">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="serial">46491e7b-1f61-45bf-a185-2a6b9dfb7258</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="uuid">46491e7b-1f61-45bf-a185-2a6b9dfb7258</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:71:8f:76"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <target dev="tap51030d00-06"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/console.log" append="off"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:46:51 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:46:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:46:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:46:51 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
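The domain XML dumped by `_get_guest_xml` above is plain libvirt XML and can be inspected programmatically — useful when debugging why a guest's disks point at the wrong Ceph pool or monitor. A small sketch with the standard library that extracts each RBD-backed disk's target device, image name, and monitor endpoint (matching the `<disk type="network">` elements in the log):

```python
import xml.etree.ElementTree as ET

def rbd_disks(domain_xml):
    """Return (target dev, rbd image, monitor host:port) for each
    RBD network disk in a libvirt domain XML document like the one
    nova logs at the end of _get_guest_xml."""
    out = []
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find('source')
        if src is None or src.get('protocol') != 'rbd':
            continue
        host = src.find('host')
        out.append((disk.find('target').get('dev'),
                    src.get('name'),
                    '%s:%s' % (host.get('name'), host.get('port'))))
    return out
```

Run against the XML above this would yield two entries: `vda` backed by `vms/46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk` and the `sda` config-drive CD-ROM backed by the `_disk.config` image, both served from 192.168.122.100:6789.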
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Preparing to wait for external event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.866 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
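The three lockutils lines above (acquiring, acquired with wait time, released with hold time) are the standard oslo.concurrency pattern nova uses to serialize event bookkeeping per instance. A minimal analogue of that logging wrapper built on `threading.Lock` (the real `lockutils` also offers fair locks, external file locks, and semaphores; this only reproduces the acquire/hold accounting seen in the log):

```python
import functools
import threading
import time
from collections import defaultdict

# One named lock per key, as lockutils keeps one per lock name.
_locks = defaultdict(threading.Lock)

def synchronized(name):
    """Minimal sketch of oslo_concurrency.lockutils.synchronized:
    serialize callers on a named lock and log how long each caller
    waited for the lock and how long it held it."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            lock = _locks[name]
            t0 = time.monotonic()
            with lock:
                print('Lock "%s" acquired by "%s" :: waited %.3fs'
                      % (name, fn.__name__, time.monotonic() - t0))
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print('Lock "%s" "released" by "%s" :: held %.3fs'
                          % (name, fn.__name__, time.monotonic() - t1))
        return inner
    return wrap
```

In the log both waited and held are 0.000s because nothing else was contending for the `<uuid>-events` lock at that moment.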
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.866 253542 DEBUG nova.virt.libvirt.vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-S
erversNegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:46Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.867 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.867 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.868 253542 DEBUG os_vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.876 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51030d00-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.877 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51030d00-06, col_values=(('external_ids', {'iface-id': '51030d00-0656-4e18-a844-07210dd53c67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:8f:76', 'vm-uuid': '46491e7b-1f61-45bf-a185-2a6b9dfb7258'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:51 np0005534516 NetworkManager[48915]: <info>  [1764060411.8800] manager: (tap51030d00-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.888 253542 INFO os_vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06')#033[00m
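The plug sequence above is two ovsdbapp transactions: `AddPortCommand` attaches `tap51030d00-06` to `br-int`, and `DbSetCommand` writes the `external_ids` that let ovn-controller match the OVS interface to the Neutron port. os-vif speaks OVSDB directly, but the same result can be expressed as one idempotent `ovs-vsctl` invocation; the sketch below only builds that argv (it does not execute anything, since that needs a live OVS):

```python
def plug_port_cmd(bridge, port, iface_id, mac, vm_uuid):
    """Build the ovs-vsctl equivalent of the two ovsdbapp commands
    logged above: a --may-exist add-port plus the external_ids keys
    (iface-id = Neutron port UUID) that OVN uses to bind the port.
    Sketch only; os-vif uses the OVSDB protocol, not the CLI."""
    external_ids = {
        'iface-id': iface_id,
        'iface-status': 'active',
        'attached-mac': mac,
        'vm-uuid': vm_uuid,
    }
    cmd = ['ovs-vsctl', '--may-exist', 'add-port', bridge, port, '--',
           'set', 'Interface', port]
    cmd += ['external_ids:%s=%s' % (k, v)
            for k, v in sorted(external_ids.items())]
    return cmd
```

`--may-exist` matches the `may_exist=True` in the logged `AddPortCommand`, which is why the earlier `AddBridgeCommand` for the already-present `br-int` produced "Transaction caused no change".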
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.965 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.966 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.967 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:71:8f:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.968 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Using config drive#033[00m
Nov 25 03:46:51 np0005534516 nova_compute[253538]: 2025-11-25 08:46:51.994 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.168 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updated VIF entry in instance network info cache for port 51030d00-0656-4e18-a844-07210dd53c67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.169 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.181 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:46:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.407 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating config drive at /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.416 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5cv_v4pt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.583 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5cv_v4pt" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.610 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:46:52 np0005534516 nova_compute[253538]: 2025-11-25 08:46:52.665 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:46:53
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.data']
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 180 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.624 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.959s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.626 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deleting local config drive /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config because it was imported into RBD.
Nov 25 03:46:53 np0005534516 NetworkManager[48915]: <info>  [1764060413.6861] manager: (tap51030d00-06): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Nov 25 03:46:53 np0005534516 kernel: tap51030d00-06: entered promiscuous mode
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00996|binding|INFO|Claiming lport 51030d00-0656-4e18-a844-07210dd53c67 for this chassis.
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00997|binding|INFO|51030d00-0656-4e18-a844-07210dd53c67: Claiming fa:16:3e:71:8f:76 10.100.0.8
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.695 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:8f:76 10.100.0.8'], port_security=['fa:16:3e:71:8f:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '46491e7b-1f61-45bf-a185-2a6b9dfb7258', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=51030d00-0656-4e18-a844-07210dd53c67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.696 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 51030d00-0656-4e18-a844-07210dd53c67 in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.698 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00998|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 ovn-installed in OVS
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00999|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 up in Southbound
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb39712-3f31-4de9-83b9-00955b656f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 systemd-udevd[357543]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:46:53 np0005534516 systemd-machined[215790]: New machine qemu-129-instance-00000069.
Nov 25 03:46:53 np0005534516 NetworkManager[48915]: <info>  [1764060413.7391] device (tap51030d00-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:46:53 np0005534516 NetworkManager[48915]: <info>  [1764060413.7402] device (tap51030d00-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:46:53 np0005534516 systemd[1]: Started Virtual Machine qemu-129-instance-00000069.
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.746 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[41080deb-4311-4e1a-80bc-96af4baa2d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.750 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[530b8a6e-3935-4910-8bfb-8f61df6fd13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.777 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[557290e2-a00e-420c-8253-810e251e90df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.797 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04b55135-67b1-46d0-aeaa-20ed663f1a0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357552, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd13505-2b43-4c56-9ddc-5b6d60cb7101]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575604, 'tstamp': 575604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357555, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575606, 'tstamp': 575606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357555, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.815 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:53 np0005534516 nova_compute[253538]: 2025-11-25 08:46:53.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:46:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:46:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:46:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:53Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.164 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060414.1637063, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.164 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Started (Lifecycle Event)
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.182 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.185 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060414.1638577, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.185 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Paused (Lifecycle Event)
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.199 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.202 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:46:54 np0005534516 nova_compute[253538]: 2025-11-25 08:46:54.221 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:46:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 192 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Nov 25 03:46:55 np0005534516 podman[357599]: 2025-11-25 08:46:55.824581071 +0000 UTC m=+0.062768501 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.094 253542 DEBUG nova.compute.manager [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG nova.compute.manager [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Processing event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.096 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.099 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060416.0996416, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.099 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Resumed (Lifecycle Event)
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.102 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.105 253542 INFO nova.virt.libvirt.driver [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance spawned successfully.
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.106 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.126 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.132 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.135 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.136 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.136 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.164 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.192 253542 INFO nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 9.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.192 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.254 253542 INFO nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 10.39 seconds to build instance.#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.267 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:56 np0005534516 nova_compute[253538]: 2025-11-25 08:46:56.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:46:57 np0005534516 nova_compute[253538]: 2025-11-25 08:46:57.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.190 253542 DEBUG nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.191 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.191 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 DEBUG nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 WARNING nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received unexpected event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.454 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.455 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.455 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.456 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.456 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.457 253542 INFO nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Terminating instance#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.458 253542 DEBUG nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:46:58 np0005534516 kernel: tap51030d00-06 (unregistering): left promiscuous mode
Nov 25 03:46:58 np0005534516 NetworkManager[48915]: <info>  [1764060418.4968] device (tap51030d00-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:58Z|01000|binding|INFO|Releasing lport 51030d00-0656-4e18-a844-07210dd53c67 from this chassis (sb_readonly=0)
Nov 25 03:46:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:58Z|01001|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 down in Southbound
Nov 25 03:46:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:46:58Z|01002|binding|INFO|Removing iface tap51030d00-06 ovn-installed in OVS
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.515 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:8f:76 10.100.0.8'], port_security=['fa:16:3e:71:8f:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '46491e7b-1f61-45bf-a185-2a6b9dfb7258', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=51030d00-0656-4e18-a844-07210dd53c67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.516 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 51030d00-0656-4e18-a844-07210dd53c67 in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.517 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f388c3c3-3acc-419c-b499-2d77fcebd72b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 03:46:58 np0005534516 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000069.scope: Consumed 2.871s CPU time.
Nov 25 03:46:58 np0005534516 systemd-machined[215790]: Machine qemu-129-instance-00000069 terminated.
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.587 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01fe2583-eff8-4b27-805c-b76be726ed59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.589 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1567a5f-121b-4d45-bb43-0dfcaada992e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.620 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[087c58b1-a107-48dd-abe0-8afa243227cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83b12c23-6c31-4c07-9ebb-14f1527013ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357630, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.664 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1809fcfd-fdbb-4f88-95ad-89441c920d82]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575604, 'tstamp': 575604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357631, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575606, 'tstamp': 575606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357631, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.666 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.672 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.672 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.706 253542 INFO nova.virt.libvirt.driver [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance destroyed successfully.#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.706 253542 DEBUG nova.objects.instance [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.718 253542 DEBUG nova.virt.libvirt.vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:46:56Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.718 253542 DEBUG nova.network.os_vif_util [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.719 253542 DEBUG nova.network.os_vif_util [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.720 253542 DEBUG os_vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.722 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51030d00-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:46:58 np0005534516 nova_compute[253538]: 2025-11-25 08:46:58.729 253542 INFO os_vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06')#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.165 253542 INFO nova.virt.libvirt.driver [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deleting instance files /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258_del#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.166 253542 INFO nova.virt.libvirt.driver [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deletion of /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258_del complete#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.248 253542 INFO nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG oslo.service.loopingcall [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:46:59 np0005534516 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG nova.network.neutron [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:46:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 03:46:59 np0005534516 podman[357663]: 2025-11-25 08:46:59.839185828 +0000 UTC m=+0.076982012 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.299 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 WARNING nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received unexpected event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.424 253542 DEBUG nova.network.neutron [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.438 253542 INFO nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 1.19 seconds to deallocate network for instance.#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.475 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.475 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.488 253542 DEBUG nova.compute.manager [req-b0a575b9-535f-46e9-9990-d0f9e6cbe346 req-7f07f70f-e124-4c5f-a4f3-26f0dfb13033 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-deleted-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:47:00 np0005534516 nova_compute[253538]: 2025-11-25 08:47:00.574 253542 DEBUG oslo_concurrency.processutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:47:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:47:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261886896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.082 253542 DEBUG oslo_concurrency.processutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.090 253542 DEBUG nova.compute.provider_tree [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.105 253542 DEBUG nova.scheduler.client.report [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.128 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.160 253542 INFO nova.scheduler.client.report [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 46491e7b-1f61-45bf-a185-2a6b9dfb7258#033[00m
Nov 25 03:47:01 np0005534516 nova_compute[253538]: 2025-11-25 08:47:01.217 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 202 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 3.8 MiB/s wr, 133 op/s
Nov 25 03:47:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:02 np0005534516 nova_compute[253538]: 2025-11-25 08:47:02.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Nov 25 03:47:03 np0005534516 nova_compute[253538]: 2025-11-25 08:47:03.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:03 np0005534516 podman[357706]: 2025-11-25 08:47:03.851532027 +0000 UTC m=+0.097817921 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007559663345705542 of space, bias 1.0, pg target 0.22678990037116625 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:47:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:47:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 161 op/s
Nov 25 03:47:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:07 np0005534516 nova_compute[253538]: 2025-11-25 08:47:07.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Nov 25 03:47:08 np0005534516 nova_compute[253538]: 2025-11-25 08:47:08.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 KiB/s wr, 102 op/s
Nov 25 03:47:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 67 KiB/s wr, 96 op/s
Nov 25 03:47:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:12 np0005534516 nova_compute[253538]: 2025-11-25 08:47:12.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 03:47:13 np0005534516 nova_compute[253538]: 2025-11-25 08:47:13.705 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060418.7043505, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:47:13 np0005534516 nova_compute[253538]: 2025-11-25 08:47:13.706 253542 INFO nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:47:13 np0005534516 nova_compute[253538]: 2025-11-25 08:47:13.722 253542 DEBUG nova.compute.manager [None req-6c397927-9bbc-48f3-82f4-5cd8fab880c2 - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:47:13 np0005534516 nova_compute[253538]: 2025-11-25 08:47:13.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:17 np0005534516 nova_compute[253538]: 2025-11-25 08:47:17.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:17 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c77da09d-3b00-4d8d-abad-c88895c2ecf3 does not exist
Nov 25 03:47:17 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b63b025c-8960-4dfa-8bec-ad279b45aa06 does not exist
Nov 25 03:47:17 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c5effa08-5f89-4848-9d34-57b223cc7fc0 does not exist
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:47:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:47:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 03:47:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:47:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.217610452 +0000 UTC m=+0.051772848 container create 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:47:18 np0005534516 systemd[1]: Started libpod-conmon-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope.
Nov 25 03:47:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.192670034 +0000 UTC m=+0.026832460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.293917376 +0000 UTC m=+0.128079802 container init 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.302841515 +0000 UTC m=+0.137003921 container start 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.305861345 +0000 UTC m=+0.140023751 container attach 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:47:18 np0005534516 elastic_williams[358022]: 167 167
Nov 25 03:47:18 np0005534516 systemd[1]: libpod-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope: Deactivated successfully.
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.309834542 +0000 UTC m=+0.143996948 container died 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:47:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8e1a19815b431335eb246fac309316f9715cd86e8ba2194b67d6d035798373e1-merged.mount: Deactivated successfully.
Nov 25 03:47:18 np0005534516 podman[358008]: 2025-11-25 08:47:18.353160442 +0000 UTC m=+0.187322848 container remove 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 03:47:18 np0005534516 systemd[1]: libpod-conmon-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope: Deactivated successfully.
Nov 25 03:47:18 np0005534516 podman[358046]: 2025-11-25 08:47:18.511892814 +0000 UTC m=+0.038051451 container create 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:47:18 np0005534516 systemd[1]: Started libpod-conmon-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope.
Nov 25 03:47:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:18 np0005534516 podman[358046]: 2025-11-25 08:47:18.49570863 +0000 UTC m=+0.021867297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:18 np0005534516 podman[358046]: 2025-11-25 08:47:18.607008991 +0000 UTC m=+0.133167618 container init 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 03:47:18 np0005534516 podman[358046]: 2025-11-25 08:47:18.620374009 +0000 UTC m=+0.146532636 container start 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:47:18 np0005534516 podman[358046]: 2025-11-25 08:47:18.623522153 +0000 UTC m=+0.149680780 container attach 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 03:47:18 np0005534516 nova_compute[253538]: 2025-11-25 08:47:18.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 03:47:19 np0005534516 reverent_lovelace[358063]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:47:19 np0005534516 reverent_lovelace[358063]: --> relative data size: 1.0
Nov 25 03:47:19 np0005534516 reverent_lovelace[358063]: --> All data devices are unavailable
Nov 25 03:47:19 np0005534516 systemd[1]: libpod-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Deactivated successfully.
Nov 25 03:47:19 np0005534516 podman[358046]: 2025-11-25 08:47:19.725168197 +0000 UTC m=+1.251326824 container died 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:47:19 np0005534516 systemd[1]: libpod-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Consumed 1.051s CPU time.
Nov 25 03:47:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a-merged.mount: Deactivated successfully.
Nov 25 03:47:19 np0005534516 podman[358046]: 2025-11-25 08:47:19.789906461 +0000 UTC m=+1.316065088 container remove 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:47:19 np0005534516 systemd[1]: libpod-conmon-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Deactivated successfully.
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.57139629 +0000 UTC m=+0.056770811 container create 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:47:20 np0005534516 systemd[1]: Started libpod-conmon-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope.
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.545097626 +0000 UTC m=+0.030472227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.677012269 +0000 UTC m=+0.162386830 container init 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.685330792 +0000 UTC m=+0.170705303 container start 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:47:20 np0005534516 modest_bhaskara[358261]: 167 167
Nov 25 03:47:20 np0005534516 systemd[1]: libpod-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope: Deactivated successfully.
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.691213729 +0000 UTC m=+0.176588280 container attach 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.691670571 +0000 UTC m=+0.177045092 container died 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:47:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c7a541d9857c589bf2bf6ed978a48c6aa6f2d6fb5cf102cf7f88784232aaf503-merged.mount: Deactivated successfully.
Nov 25 03:47:20 np0005534516 podman[358244]: 2025-11-25 08:47:20.734093207 +0000 UTC m=+0.219467718 container remove 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:47:20 np0005534516 systemd[1]: libpod-conmon-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope: Deactivated successfully.
Nov 25 03:47:20 np0005534516 podman[358285]: 2025-11-25 08:47:20.977361973 +0000 UTC m=+0.077602550 container create e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:47:21 np0005534516 podman[358285]: 2025-11-25 08:47:20.938108501 +0000 UTC m=+0.038349068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:21 np0005534516 systemd[1]: Started libpod-conmon-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope.
Nov 25 03:47:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:21 np0005534516 podman[358285]: 2025-11-25 08:47:21.085060277 +0000 UTC m=+0.185300904 container init e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:47:21 np0005534516 podman[358285]: 2025-11-25 08:47:21.100654305 +0000 UTC m=+0.200894842 container start e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 03:47:21 np0005534516 podman[358285]: 2025-11-25 08:47:21.108158396 +0000 UTC m=+0.208399033 container attach e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:47:21 np0005534516 nova_compute[253538]: 2025-11-25 08:47:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]: {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    "0": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "devices": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "/dev/loop3"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            ],
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_name": "ceph_lv0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_size": "21470642176",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "name": "ceph_lv0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "tags": {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_name": "ceph",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.crush_device_class": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.encrypted": "0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_id": "0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.vdo": "0"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            },
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "vg_name": "ceph_vg0"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        }
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    ],
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    "1": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "devices": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "/dev/loop4"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            ],
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_name": "ceph_lv1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_size": "21470642176",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "name": "ceph_lv1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "tags": {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_name": "ceph",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.crush_device_class": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.encrypted": "0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_id": "1",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.vdo": "0"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            },
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "vg_name": "ceph_vg1"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        }
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    ],
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    "2": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "devices": [
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "/dev/loop5"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            ],
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_name": "ceph_lv2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_size": "21470642176",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "name": "ceph_lv2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "tags": {
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.cluster_name": "ceph",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.crush_device_class": "",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.encrypted": "0",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osd_id": "2",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:                "ceph.vdo": "0"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            },
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "type": "block",
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:            "vg_name": "ceph_vg2"
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:        }
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]:    ]
Nov 25 03:47:21 np0005534516 eloquent_lederberg[358301]: }
Nov 25 03:47:21 np0005534516 systemd[1]: libpod-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope: Deactivated successfully.
Nov 25 03:47:21 np0005534516 podman[358285]: 2025-11-25 08:47:21.936246733 +0000 UTC m=+1.036487280 container died e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:47:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b-merged.mount: Deactivated successfully.
Nov 25 03:47:22 np0005534516 podman[358285]: 2025-11-25 08:47:22.050838692 +0000 UTC m=+1.151079239 container remove e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:47:22 np0005534516 systemd[1]: libpod-conmon-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope: Deactivated successfully.
Nov 25 03:47:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:22 np0005534516 nova_compute[253538]: 2025-11-25 08:47:22.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.815280805 +0000 UTC m=+0.064542089 container create 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:47:22 np0005534516 systemd[1]: Started libpod-conmon-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope.
Nov 25 03:47:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.788623242 +0000 UTC m=+0.037884606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.894928358 +0000 UTC m=+0.144189742 container init 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.901809453 +0000 UTC m=+0.151070737 container start 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.905148202 +0000 UTC m=+0.154409496 container attach 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:47:22 np0005534516 affectionate_agnesi[358478]: 167 167
Nov 25 03:47:22 np0005534516 systemd[1]: libpod-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope: Deactivated successfully.
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.909959911 +0000 UTC m=+0.159221225 container died 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:47:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e64e9be6df04ffdeb7b98598942aa01935c85ecf0a134d2c983a3b73f09af5b7-merged.mount: Deactivated successfully.
Nov 25 03:47:22 np0005534516 podman[358462]: 2025-11-25 08:47:22.9454028 +0000 UTC m=+0.194664084 container remove 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:47:22 np0005534516 systemd[1]: libpod-conmon-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope: Deactivated successfully.
Nov 25 03:47:23 np0005534516 podman[358501]: 2025-11-25 08:47:23.153031971 +0000 UTC m=+0.059910046 container create 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:47:23 np0005534516 systemd[1]: Started libpod-conmon-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope.
Nov 25 03:47:23 np0005534516 podman[358501]: 2025-11-25 08:47:23.124209458 +0000 UTC m=+0.031087583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:47:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:47:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:47:23 np0005534516 podman[358501]: 2025-11-25 08:47:23.272534781 +0000 UTC m=+0.179412896 container init 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 03:47:23 np0005534516 podman[358501]: 2025-11-25 08:47:23.279163579 +0000 UTC m=+0.186041654 container start 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:47:23 np0005534516 podman[358501]: 2025-11-25 08:47:23.286842344 +0000 UTC m=+0.193720469 container attach 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 03:47:23 np0005534516 nova_compute[253538]: 2025-11-25 08:47:23.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:24 np0005534516 quirky_wright[358517]: {
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_id": 1,
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "type": "bluestore"
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    },
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_id": 2,
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "type": "bluestore"
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    },
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_id": 0,
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:        "type": "bluestore"
Nov 25 03:47:24 np0005534516 quirky_wright[358517]:    }
Nov 25 03:47:24 np0005534516 quirky_wright[358517]: }
Nov 25 03:47:24 np0005534516 systemd[1]: libpod-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Deactivated successfully.
Nov 25 03:47:24 np0005534516 systemd[1]: libpod-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Consumed 1.125s CPU time.
Nov 25 03:47:24 np0005534516 podman[358501]: 2025-11-25 08:47:24.537298164 +0000 UTC m=+1.444176249 container died 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:47:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d-merged.mount: Deactivated successfully.
Nov 25 03:47:24 np0005534516 podman[358501]: 2025-11-25 08:47:24.597077824 +0000 UTC m=+1.503955899 container remove 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:47:24 np0005534516 systemd[1]: libpod-conmon-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Deactivated successfully.
Nov 25 03:47:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:47:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:47:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 48120499-6576-4cff-82dd-9bfa8ec7444b does not exist
Nov 25 03:47:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f17f3596-a74d-47d1-866b-4c21259da89a does not exist
Nov 25 03:47:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:47:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 03:47:26 np0005534516 podman[358612]: 2025-11-25 08:47:26.832578635 +0000 UTC m=+0.076087769 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.242103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447242166, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1292, "num_deletes": 250, "total_data_size": 1879368, "memory_usage": 1905504, "flush_reason": "Manual Compaction"}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447256714, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1850201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40128, "largest_seqno": 41419, "table_properties": {"data_size": 1844180, "index_size": 3288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 11935, "raw_average_key_size": 18, "raw_value_size": 1832112, "raw_average_value_size": 2797, "num_data_blocks": 147, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060323, "oldest_key_time": 1764060323, "file_creation_time": 1764060447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 14673 microseconds, and 8891 cpu microseconds.
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.256775) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1850201 bytes OK
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.256800) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258630) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258654) EVENT_LOG_v1 {"time_micros": 1764060447258647, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1873554, prev total WAL file size 1873554, number of live WAL files 2.
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.259758) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1806KB)], [89(9572KB)]
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447259887, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11652380, "oldest_snapshot_seqno": -1}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6617 keys, 10933566 bytes, temperature: kUnknown
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447346370, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10933566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10886716, "index_size": 29202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169714, "raw_average_key_size": 25, "raw_value_size": 10765498, "raw_average_value_size": 1626, "num_data_blocks": 1166, "num_entries": 6617, "num_filter_entries": 6617, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.346586) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10933566 bytes
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.347981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.7 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.3 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.2) write-amplify(5.9) OK, records in: 7129, records dropped: 512 output_compression: NoCompression
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.347999) EVENT_LOG_v1 {"time_micros": 1764060447347990, "job": 52, "event": "compaction_finished", "compaction_time_micros": 86529, "compaction_time_cpu_micros": 40105, "output_level": 6, "num_output_files": 1, "total_output_size": 10933566, "num_input_records": 7129, "num_output_records": 6617, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447348436, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447350128, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.259540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:47:27 np0005534516 nova_compute[253538]: 2025-11-25 08:47:27.408 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:47:28 np0005534516 nova_compute[253538]: 2025-11-25 08:47:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:28 np0005534516 nova_compute[253538]: 2025-11-25 08:47:28.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:47:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:47:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:47:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:47:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:47:30 np0005534516 nova_compute[253538]: 2025-11-25 08:47:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:30 np0005534516 nova_compute[253538]: 2025-11-25 08:47:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:47:30 np0005534516 podman[358633]: 2025-11-25 08:47:30.807203172 +0000 UTC m=+0.062032792 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:47:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:31.684 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:47:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:31.685 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:47:31 np0005534516 nova_compute[253538]: 2025-11-25 08:47:31.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:32 np0005534516 nova_compute[253538]: 2025-11-25 08:47:32.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:32 np0005534516 nova_compute[253538]: 2025-11-25 08:47:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:32 np0005534516 nova_compute[253538]: 2025-11-25 08:47:32.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:47:32 np0005534516 nova_compute[253538]: 2025-11-25 08:47:32.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:47:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:32.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:47:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:33 np0005534516 nova_compute[253538]: 2025-11-25 08:47:33.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:33 np0005534516 nova_compute[253538]: 2025-11-25 08:47:33.785 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:47:33 np0005534516 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:47:33 np0005534516 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:47:33 np0005534516 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:47:34 np0005534516 podman[358655]: 2025-11-25 08:47:34.881379895 +0000 UTC m=+0.125313167 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:47:34 np0005534516 nova_compute[253538]: 2025-11-25 08:47:34.972 253542 INFO nova.compute.manager [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Pausing#033[00m
Nov 25 03:47:34 np0005534516 nova_compute[253538]: 2025-11-25 08:47:34.973 253542 DEBUG nova.objects.instance [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.007 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060455.0074933, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.008 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.009 253542 DEBUG nova.compute.manager [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.030 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.035 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.045 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.049 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.064 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.064 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:35 np0005534516 nova_compute[253538]: 2025-11-25 08:47:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:47:36 np0005534516 nova_compute[253538]: 2025-11-25 08:47:36.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:47:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:47:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602891899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.027 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.124 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:47:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.315 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3696MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.317 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.317 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.378 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.378 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.379 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.393 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.410 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.410 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.425 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.445 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.476 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:47:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.777 253542 INFO nova.compute.manager [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Unpausing#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.778 253542 DEBUG nova.objects.instance [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.805 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060457.8048139, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.805 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:47:37 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.808 253542 DEBUG nova.virt.libvirt.guest [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.809 253542 DEBUG nova.compute.manager [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.828 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.832 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.854 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 25 03:47:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:47:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798887891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.972 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.980 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:47:37 np0005534516 nova_compute[253538]: 2025-11-25 08:47:37.992 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:47:38 np0005534516 nova_compute[253538]: 2025-11-25 08:47:38.014 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:47:38 np0005534516 nova_compute[253538]: 2025-11-25 08:47:38.015 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:38 np0005534516 nova_compute[253538]: 2025-11-25 08:47:38.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:39 np0005534516 nova_compute[253538]: 2025-11-25 08:47:39.008 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:40 np0005534516 nova_compute[253538]: 2025-11-25 08:47:40.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:47:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:47:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 03:47:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:42 np0005534516 nova_compute[253538]: 2025-11-25 08:47:42.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:47:43 np0005534516 nova_compute[253538]: 2025-11-25 08:47:43.763 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Nov 25 03:47:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:47 np0005534516 nova_compute[253538]: 2025-11-25 08:47:47.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 03:47:48 np0005534516 nova_compute[253538]: 2025-11-25 08:47:48.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 03:47:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 03:47:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:52 np0005534516 nova_compute[253538]: 2025-11-25 08:47:52.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:47:53
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 03:47:53 np0005534516 nova_compute[253538]: 2025-11-25 08:47:53.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:47:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:47:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 03:47:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:47:57 np0005534516 nova_compute[253538]: 2025-11-25 08:47:57.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 25 03:47:57 np0005534516 podman[358730]: 2025-11-25 08:47:57.827156245 +0000 UTC m=+0.077975079 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:47:58 np0005534516 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:47:58 np0005534516 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:47:58 np0005534516 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 INFO nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shelving#033[00m
Nov 25 03:47:58 np0005534516 nova_compute[253538]: 2025-11-25 08:47:58.093 253542 DEBUG nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:47:58 np0005534516 nova_compute[253538]: 2025-11-25 08:47:58.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:47:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 03:48:00 np0005534516 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 03:48:00 np0005534516 NetworkManager[48915]: <info>  [1764060480.3647] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:00Z|01003|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 03:48:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:00Z|01004|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 03:48:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:00Z|01005|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:00 np0005534516 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 03:48:00 np0005534516 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 15.717s CPU time.
Nov 25 03:48:00 np0005534516 systemd-machined[215790]: Machine qemu-128-instance-00000068 terminated.
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.464 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.466 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.468 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b126234e-82cd-49e5-9b28-c32f37514807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.470 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore#033[00m
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : haproxy version is 2.8.14-c23fe91
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : path to executable is /usr/sbin/haproxy
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : Exiting Master process...
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : Exiting Master process...
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [ALERT]    (357204) : Current worker (357206) exited with code 143 (Terminated)
Nov 25 03:48:00 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : All workers exited. Exiting... (0)
Nov 25 03:48:00 np0005534516 systemd[1]: libpod-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope: Deactivated successfully.
Nov 25 03:48:00 np0005534516 podman[358774]: 2025-11-25 08:48:00.630914635 +0000 UTC m=+0.060808950 container died 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:48:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf-userdata-shm.mount: Deactivated successfully.
Nov 25 03:48:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5392b1f619a47826c2c6fed409eebb3fb96b83ac7eec8e199c10f4e6b9e7199c-merged.mount: Deactivated successfully.
Nov 25 03:48:00 np0005534516 podman[358774]: 2025-11-25 08:48:00.688563249 +0000 UTC m=+0.118457544 container cleanup 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:48:00 np0005534516 systemd[1]: libpod-conmon-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope: Deactivated successfully.
Nov 25 03:48:00 np0005534516 podman[358816]: 2025-11-25 08:48:00.776795032 +0000 UTC m=+0.065287480 container remove 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.785 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f37f1650-ff9f-4b3e-bd91-5d6a5ecfcba4]: (4, ('Tue Nov 25 08:48:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf)\n7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf\nTue Nov 25 08:48:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf)\n7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b2f437-acd0-4303-bf43-3c84780b545f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.788 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:00 np0005534516 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.810 253542 DEBUG nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.811 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.812 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.812 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.813 253542 DEBUG nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.813 253542 WARNING nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state shelving.#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.814 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2df285cd-c786-46bf-9a81-964a4e668155]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 nova_compute[253538]: 2025-11-25 08:48:00.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.838 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[029e7655-3026-4b1a-96c7-f9b8a8780c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.840 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83c845af-e6d4-4204-ade8-133a1639f041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77a406c3-5ab0-422b-bdd0-d8a8995d17a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575584, 'reachable_time': 24835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358836, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.872 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:48:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.872 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d04e6fa9-5490-48c6-86e0-be83949c6435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:00 np0005534516 podman[358835]: 2025-11-25 08:48:00.962447844 +0000 UTC m=+0.106743510 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.112 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.119 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.#033[00m
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.120 253542 DEBUG nova.objects.instance [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.421 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Beginning cold snapshot process#033[00m
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.540 253542 DEBUG nova.virt.libvirt.imagebackend [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 03:48:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.731 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] creating snapshot(8057da270e024476acbd6ce05785685a) on rbd image(0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:48:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 25 03:48:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 25 03:48:01 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 25 03:48:01 np0005534516 nova_compute[253538]: 2025-11-25 08:48:01.945 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] cloning vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk@8057da270e024476acbd6ce05785685a to images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.053 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] flattening images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:48:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.423 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.751 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] removing snapshot(8057da270e024476acbd6ce05785685a) on rbd image(0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 03:48:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 25 03:48:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 25 03:48:02 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.923 253542 DEBUG nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.924 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.924 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.925 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.925 253542 DEBUG nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.926 253542 WARNING nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 03:48:02 np0005534516 nova_compute[253538]: 2025-11-25 08:48:02.952 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] creating snapshot(snap) on rbd image(515c4bcf-552c-4c04-8c0d-ad03b9e9133d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 03:48:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 188 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 03:48:03 np0005534516 nova_compute[253538]: 2025-11-25 08:48:03.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 25 03:48:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 25 03:48:03 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594002327928631 of space, bias 1.0, pg target 0.22782006983785894 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0011925165437585467 of space, bias 1.0, pg target 0.357754963127564 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:48:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.239 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Snapshot image upload complete#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.240 253542 DEBUG nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.322 253542 INFO nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shelve offloading#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.334 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.335 253542 DEBUG nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.338 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.339 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:05 np0005534516 nova_compute[253538]: 2025-11-25 08:48:05.339 253542 DEBUG nova.network.neutron [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:48:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 210 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 MiB/s wr, 75 op/s
Nov 25 03:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:48:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9129 writes, 41K keys, 9129 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 9129 writes, 9129 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1550 writes, 7194 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 9.45 MB, 0.02 MB/s#012Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.9      1.71              0.17        26    0.066       0      0       0.0       0.0#012  L6      1/0   10.43 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     54.0     44.8      4.26              0.63        25    0.171    135K    14K       0.0       0.0#012 Sum      1/0   10.43 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     38.5     40.3      5.98              0.81        51    0.117    135K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     79.1     81.4      0.77              0.24        12    0.064     40K   3102       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     54.0     44.8      4.26              0.63        25    0.171    135K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.9      1.71              0.17        25    0.068       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.048, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 26.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000221 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1750,25.58 MB,8.41591%) FilterBlock(52,386.11 KB,0.124033%) IndexBlock(52,655.14 KB,0.210456%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 03:48:05 np0005534516 podman[358999]: 2025-11-25 08:48:05.884797262 +0000 UTC m=+0.134255197 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:48:06 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:48:06 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:48:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:07 np0005534516 nova_compute[253538]: 2025-11-25 08:48:07.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 03:48:08 np0005534516 nova_compute[253538]: 2025-11-25 08:48:08.190 253542 DEBUG nova.network.neutron [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:08 np0005534516 nova_compute[253538]: 2025-11-25 08:48:08.227 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:08 np0005534516 nova_compute[253538]: 2025-11-25 08:48:08.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.592 253542 DEBUG nova.objects.instance [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.604 253542 DEBUG nova.virt.libvirt.vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:01Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.605 253542 DEBUG nova.network.os_vif_util [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.606 253542 DEBUG nova.network.os_vif_util [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.606 253542 DEBUG os_vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.609 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7246ed42-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.617 253542 INFO os_vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')#033[00m
Nov 25 03:48:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 129 op/s
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.713 253542 DEBUG nova.compute.manager [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.714 253542 DEBUG nova.compute.manager [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.910 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting instance files /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del#033[00m
Nov 25 03:48:09 np0005534516 nova_compute[253538]: 2025-11-25 08:48:09.911 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deletion of /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del complete#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.068 253542 INFO nova.scheduler.client.report [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.121 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.122 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.154 253542 DEBUG oslo_concurrency.processutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:48:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207137970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.581 253542 DEBUG oslo_concurrency.processutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.591 253542 DEBUG nova.compute.provider_tree [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.610 253542 DEBUG nova.scheduler.client.report [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.702 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:10 np0005534516 nova_compute[253538]: 2025-11-25 08:48:10.753 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 117 op/s
Nov 25 03:48:12 np0005534516 nova_compute[253538]: 2025-11-25 08:48:12.174 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:48:12 np0005534516 nova_compute[253538]: 2025-11-25 08:48:12.175 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap7246ed42-6e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:12 np0005534516 nova_compute[253538]: 2025-11-25 08:48:12.197 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 25 03:48:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 25 03:48:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 25 03:48:12 np0005534516 nova_compute[253538]: 2025-11-25 08:48:12.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 214 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 89 op/s
Nov 25 03:48:14 np0005534516 nova_compute[253538]: 2025-11-25 08:48:14.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:15 np0005534516 nova_compute[253538]: 2025-11-25 08:48:15.611 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060480.6104796, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:15 np0005534516 nova_compute[253538]: 2025-11-25 08:48:15.612 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:48:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Nov 25 03:48:15 np0005534516 nova_compute[253538]: 2025-11-25 08:48:15.633 253542 DEBUG nova.compute.manager [None req-47b23a12-9dfd-4059-9c72-87c0180ba1af - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 INFO nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Unshelving#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.364 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.365 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.371 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.380 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.389 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.390 253542 INFO nova.compute.claims [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:48:16 np0005534516 nova_compute[253538]: 2025-11-25 08:48:16.523 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:48:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771433555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.029 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.038 253542 DEBUG nova.compute.provider_tree [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.058 253542 DEBUG nova.scheduler.client.report [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.124 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.255433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497255473, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 674, "num_deletes": 251, "total_data_size": 781704, "memory_usage": 793488, "flush_reason": "Manual Compaction"}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497260030, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 523980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41420, "largest_seqno": 42093, "table_properties": {"data_size": 520872, "index_size": 1016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8233, "raw_average_key_size": 20, "raw_value_size": 514368, "raw_average_value_size": 1289, "num_data_blocks": 46, "num_entries": 399, "num_filter_entries": 399, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060448, "oldest_key_time": 1764060448, "file_creation_time": 1764060497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4624 microseconds, and 1731 cpu microseconds.
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.260060) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 523980 bytes OK
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.260074) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261775) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261795) EVENT_LOG_v1 {"time_micros": 1764060497261789, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 778151, prev total WAL file size 778151, number of live WAL files 2.
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.262534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(511KB)], [92(10MB)]
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497262590, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11457546, "oldest_snapshot_seqno": -1}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6521 keys, 8417251 bytes, temperature: kUnknown
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497337728, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 8417251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8375163, "index_size": 24698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167916, "raw_average_key_size": 25, "raw_value_size": 8259662, "raw_average_value_size": 1266, "num_data_blocks": 979, "num_entries": 6521, "num_filter_entries": 6521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.338019) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 8417251 bytes
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.339789) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.3 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(37.9) write-amplify(16.1) OK, records in: 7016, records dropped: 495 output_compression: NoCompression
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.339819) EVENT_LOG_v1 {"time_micros": 1764060497339806, "job": 54, "event": "compaction_finished", "compaction_time_micros": 75219, "compaction_time_cpu_micros": 40412, "output_level": 6, "num_output_files": 1, "total_output_size": 8417251, "num_input_records": 7016, "num_output_records": 6521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497340165, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497344067, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.262399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.430 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:17 np0005534516 nova_compute[253538]: 2025-11-25 08:48:17.538 253542 INFO nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 25 03:48:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.439 253542 DEBUG nova.compute.manager [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.439 253542 DEBUG nova.compute.manager [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.440 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:19 np0005534516 nova_compute[253538]: 2025-11-25 08:48:19.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.107 253542 DEBUG nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.125 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.128 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.128 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating image(s)#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.160 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.165 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.167 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.168 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.201 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.224 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.228 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.229 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.494 253542 DEBUG nova.virt.libvirt.imagebackend [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.560 253542 DEBUG nova.virt.libvirt.imagebackend [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.561 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] cloning images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d@snap to None/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 03:48:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.674 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.796 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:21 np0005534516 nova_compute[253538]: 2025-11-25 08:48:21.862 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] flattening vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 03:48:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.294 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Image rbd:vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.295 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.296 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ensure instance console log exists: /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.296 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.297 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.297 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.299 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start _get_guest_xml network_info=[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:47:57Z,direct_url=<?>,disk_format='raw',id=515c4bcf-552c-4c04-8c0d-ad03b9e9133d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-673040864-shelved',owner='947f731219de435196429037dc94fd56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:48:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.303 253542 WARNING nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.307 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.308 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.311 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.311 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:47:57Z,direct_url=<?>,disk_format='raw',id=515c4bcf-552c-4c04-8c0d-ad03b9e9133d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-673040864-shelved',owner='947f731219de435196429037dc94fd56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:48:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.314 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.314 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.328 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.595 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.595 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.606 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:48:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3891797224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.785 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.817 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:22 np0005534516 nova_compute[253538]: 2025-11-25 08:48:22.821 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.009 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.009 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.027 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.098 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.098 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.105 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.106 253542 INFO nova.compute.claims [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.222 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:48:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871314407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.286 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.288 253542 DEBUG nova.virt.libvirt.vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='515c4bcf-552c-4c04-8c0d-ad03b9e9133d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vir
tio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:16Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.288 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.289 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.290 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.301 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <uuid>0f5e68e6-8f02-4a3a-ac0c-322d82950d98</uuid>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <name>instance-00000068</name>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:name>tempest-ServersNegativeTestJSON-server-673040864</nova:name>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:48:22</nova:creationTime>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="515c4bcf-552c-4c04-8c0d-ad03b9e9133d"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <nova:port uuid="7246ed42-6ec3-42e8-9b9d-12606aeeb43c">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="serial">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="uuid">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:8c:96:cd"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <target dev="tap7246ed42-6e"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log" append="off"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:48:23 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:48:23 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:48:23 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:48:23 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.302 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Preparing to wait for external event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.303 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.304 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.305 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.306 253542 DEBUG nova.virt.libvirt.vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='515c4bcf-552c-4c04-8c0d-ad03b9e9133d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:16Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.307 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.308 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.309 253542 DEBUG os_vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.311 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.312 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.315 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.316 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.317 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:23 np0005534516 NetworkManager[48915]: <info>  [1764060503.3207] manager: (tap7246ed42-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.325 253542 INFO os_vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.390 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.390 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.391 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:8c:96:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.391 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Using config drive#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.416 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.432 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.491 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'keypairs' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 182 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 618 KiB/s wr, 35 op/s
Nov 25 03:48:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:48:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376442700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.671 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.677 253542 DEBUG nova.compute.provider_tree [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.693 253542 DEBUG nova.scheduler.client.report [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.714 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.715 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.775 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.775 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.795 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.814 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.907 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.909 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.909 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating image(s)#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.935 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.958 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.989 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:23 np0005534516 nova_compute[253538]: 2025-11-25 08:48:23.992 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.025 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating config drive at /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.030 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pggu0r3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.064 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.065 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.066 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.066 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.088 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.091 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.132 253542 DEBUG nova.policy [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.170 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pggu0r3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.194 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.198 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.407 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.464 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.465 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting local config drive /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config because it was imported into RBD.#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.470 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:48:24 np0005534516 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.5218] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:24Z|01006|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 03:48:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:24Z|01007|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.533 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '7', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.535 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.537 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064#033[00m
Nov 25 03:48:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:24Z|01008|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 03:48:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:24Z|01009|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17721f9a-0ed7-4d43-bb35-c438f58adba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.549 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.551 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5194cf-3aa3-4ba1-ac43-3004e7073e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d83b3e8-bda9-4d5a-bf36-f9ac07410f94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 systemd-udevd[359610]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:24 np0005534516 systemd-machined[215790]: New machine qemu-130-instance-00000068.
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.5682] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.5693] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.570 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ee54e7ae-873d-45b8-849f-cc5da2b58f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cad40d86-b3b4-4040-a5be-be05c6acc760]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.600 253542 DEBUG nova.objects.instance [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.612 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Ensure instance console log exists: /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.614 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.619 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff7e1c7-334a-4d9e-b4f0-e962626dc4db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.624 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8453c91-1487-4be1-abd7-d6898ac18a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.6258] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.658 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[854568d3-3eff-45fc-a389-e17c66f7e487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.662 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3706ce14-122a-4cf3-874a-3cc6ca862212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.6844] device (tapaa04d86f-70): carrier: link connected
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.688 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[05295ca2-e7e8-4fae-b524-3e7df2780fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.704 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd294fc8-e393-4e7c-beb5-4a60d783bf41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585928, 'reachable_time': 25222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359662, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2aed3438-9e24-420a-b05e-5ceddc653753]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585928, 'tstamp': 585928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359663, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[957fc964-0755-4e5d-ab48-68fab7210e6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585928, 'reachable_time': 25222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359664, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.766 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1178df11-3b7e-4a50-8f3e-bb384c0df590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.833 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d2f247-e8d1-4e6f-8256-24526aae8576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:24 np0005534516 NetworkManager[48915]: <info>  [1764060504.8367] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 25 03:48:24 np0005534516 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.838 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:24Z|01010|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.841 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.842 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b524119e-eef3-4307-9212-5819f1d2ce10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.843 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:48:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.843 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.935 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Successfully created port: 5b999504-81af-4e3d-9707-b0a72b902669 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060504.984773, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:24 np0005534516 nova_compute[253538]: 2025-11-25 08:48:24.986 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.008 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.016 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060504.9848933, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.017 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.036 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:48:25 np0005534516 podman[359838]: 2025-11-25 08:48:25.231563782 +0000 UTC m=+0.052013325 container create 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 03:48:25 np0005534516 systemd[1]: Started libpod-conmon-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope.
Nov 25 03:48:25 np0005534516 podman[359838]: 2025-11-25 08:48:25.204961749 +0000 UTC m=+0.025411322 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:48:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e74c7d913d4d8440112d04e1a5043b5748bae7519c2709c1298e5997bcd39c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:25 np0005534516 podman[359838]: 2025-11-25 08:48:25.328242121 +0000 UTC m=+0.148691684 container init 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:48:25 np0005534516 podman[359838]: 2025-11-25 08:48:25.333861971 +0000 UTC m=+0.154311504 container start 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:48:25 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : New worker (359872) forked
Nov 25 03:48:25 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : Loading success.
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:48:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 220 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 61 op/s
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 47fdf725-20d0-4765-91f9-a2ba5350be93 does not exist
Nov 25 03:48:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f6d6e071-ca3d-487c-9737-d0b78d1a52d9 does not exist
Nov 25 03:48:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4ebcfa09-2b55-4f9b-8dcc-6f8dc013c723 does not exist
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.641 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Successfully updated port: 5b999504-81af-4e3d-9707-b0a72b902669 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:48:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG nova.compute.manager [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG nova.compute.manager [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:48:25 np0005534516 nova_compute[253538]: 2025-11-25 08:48:25.856 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:48:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:48:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.27851477 +0000 UTC m=+0.061997041 container create fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:26 np0005534516 systemd[1]: Started libpod-conmon-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope.
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.245397363 +0000 UTC m=+0.028879634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.381220361 +0000 UTC m=+0.164702632 container init fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.392847992 +0000 UTC m=+0.176330243 container start fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.397403895 +0000 UTC m=+0.180886146 container attach fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:48:26 np0005534516 silly_cannon[360055]: 167 167
Nov 25 03:48:26 np0005534516 systemd[1]: libpod-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope: Deactivated successfully.
Nov 25 03:48:26 np0005534516 conmon[360055]: conmon fb3cd3bf81bf55b3c3aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope/container/memory.events
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.403897778 +0000 UTC m=+0.187380029 container died fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:48:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d9ed06d03727b6ab3876a5e9632a457105a0d411b9ccf58edb2aa26a35e35959-merged.mount: Deactivated successfully.
Nov 25 03:48:26 np0005534516 podman[360038]: 2025-11-25 08:48:26.456884307 +0000 UTC m=+0.240366568 container remove fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:26 np0005534516 systemd[1]: libpod-conmon-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope: Deactivated successfully.
Nov 25 03:48:26 np0005534516 podman[360078]: 2025-11-25 08:48:26.643990278 +0000 UTC m=+0.037603118 container create be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:48:26 np0005534516 systemd[1]: Started libpod-conmon-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope.
Nov 25 03:48:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:26 np0005534516 podman[360078]: 2025-11-25 08:48:26.724178365 +0000 UTC m=+0.117791215 container init be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:48:26 np0005534516 podman[360078]: 2025-11-25 08:48:26.629409848 +0000 UTC m=+0.023022708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:26 np0005534516 podman[360078]: 2025-11-25 08:48:26.732101378 +0000 UTC m=+0.125714258 container start be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:48:26 np0005534516 podman[360078]: 2025-11-25 08:48:26.735921951 +0000 UTC m=+0.129534791 container attach be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.774 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.794 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.794 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance network_info: |[{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.795 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.795 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.799 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start _get_guest_xml network_info=[{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.804 253542 WARNING nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.813 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.814 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.817 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.817 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.818 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.818 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.821 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.821 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:48:26 np0005534516 nova_compute[253538]: 2025-11-25 08:48:26.824 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:48:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129538565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.321 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.353 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.358 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 288 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 100 op/s
Nov 25 03:48:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:48:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/637292873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:48:27 np0005534516 great_panini[360095]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.852 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:27 np0005534516 great_panini[360095]: --> relative data size: 1.0
Nov 25 03:48:27 np0005534516 great_panini[360095]: --> All data devices are unavailable
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.854 253542 DEBUG nova.virt.libvirt.vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:23Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.855 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.856 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.859 253542 DEBUG nova.objects.instance [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:27 np0005534516 systemd[1]: libpod-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Deactivated successfully.
Nov 25 03:48:27 np0005534516 systemd[1]: libpod-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Consumed 1.046s CPU time.
Nov 25 03:48:27 np0005534516 podman[360078]: 2025-11-25 08:48:27.878917972 +0000 UTC m=+1.272530812 container died be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:48:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866-merged.mount: Deactivated successfully.
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.908 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <uuid>fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</uuid>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <name>instance-0000006a</name>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-500014830</nova:name>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:48:26</nova:creationTime>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <nova:port uuid="5b999504-81af-4e3d-9707-b0a72b902669">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="serial">fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="uuid">fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:cf:68:34"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <target dev="tap5b999504-81"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/console.log" append="off"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:48:27 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:48:27 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:48:27 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:48:27 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.909 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Preparing to wait for external event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.911 253542 DEBUG nova.virt.libvirt.vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:23Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.911 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.912 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.912 253542 DEBUG os_vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.914 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.919 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b999504-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.919 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b999504-81, col_values=(('external_ids', {'iface-id': '5b999504-81af-4e3d-9707-b0a72b902669', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:68:34', 'vm-uuid': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:27 np0005534516 NetworkManager[48915]: <info>  [1764060507.9224] manager: (tap5b999504-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.929 253542 INFO os_vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81')#033[00m
Nov 25 03:48:27 np0005534516 podman[360078]: 2025-11-25 08:48:27.940773909 +0000 UTC m=+1.334386749 container remove be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:48:27 np0005534516 systemd[1]: libpod-conmon-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Deactivated successfully.
Nov 25 03:48:27 np0005534516 podman[360186]: 2025-11-25 08:48:27.979456024 +0000 UTC m=+0.071860275 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.983 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:cf:68:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:48:27 np0005534516 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Using config drive#033[00m
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.007 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.555 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating config drive at /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config#033[00m
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.560 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4oi_w9ch execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:28 np0005534516 podman[360376]: 2025-11-25 08:48:28.560447634 +0000 UTC m=+0.021330932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.707 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4oi_w9ch" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.734 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:48:28 np0005534516 nova_compute[253538]: 2025-11-25 08:48:28.738 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:28 np0005534516 podman[360376]: 2025-11-25 08:48:28.914477945 +0000 UTC m=+0.375361263 container create 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:48:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:48:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:48:29 np0005534516 systemd[1]: Started libpod-conmon-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope.
Nov 25 03:48:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:29 np0005534516 podman[360376]: 2025-11-25 08:48:29.134251282 +0000 UTC m=+0.595134640 container init 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:48:29 np0005534516 podman[360376]: 2025-11-25 08:48:29.147073025 +0000 UTC m=+0.607956343 container start 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:48:29 np0005534516 podman[360376]: 2025-11-25 08:48:29.15210779 +0000 UTC m=+0.612991088 container attach 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:48:29 np0005534516 nostalgic_nash[360430]: 167 167
Nov 25 03:48:29 np0005534516 systemd[1]: libpod-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope: Deactivated successfully.
Nov 25 03:48:29 np0005534516 podman[360376]: 2025-11-25 08:48:29.15808507 +0000 UTC m=+0.618968448 container died 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.175 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.177 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deleting local config drive /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config because it was imported into RBD.#033[00m
Nov 25 03:48:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7c0432f1067bc43c487cee263258d3c4fe163defcbc34c339935a087115412b0-merged.mount: Deactivated successfully.
Nov 25 03:48:29 np0005534516 podman[360376]: 2025-11-25 08:48:29.204869013 +0000 UTC m=+0.665752301 container remove 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:48:29 np0005534516 systemd[1]: libpod-conmon-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope: Deactivated successfully.
Nov 25 03:48:29 np0005534516 kernel: tap5b999504-81: entered promiscuous mode
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.2389] manager: (tap5b999504-81): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:29Z|01011|binding|INFO|Claiming lport 5b999504-81af-4e3d-9707-b0a72b902669 for this chassis.
Nov 25 03:48:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:29Z|01012|binding|INFO|5b999504-81af-4e3d-9707-b0a72b902669: Claiming fa:16:3e:cf:68:34 10.100.0.9
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.257 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:68:34 10.100.0.9'], port_security=['fa:16:3e:cf:68:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a5a2de2-f65d-4e79-a42e-c5ccdc573b10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a84b49-c79a-4804-945b-0e3005e5ab18, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5b999504-81af-4e3d-9707-b0a72b902669) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5b999504-81af-4e3d-9707-b0a72b902669 in datapath 41ed78ca-e8a4-4daf-884b-6b7b763e272f bound to our chassis#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.264 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41ed78ca-e8a4-4daf-884b-6b7b763e272f#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fe405515-5d9a-42fb-b8de-b53c143491dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.279 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41ed78ca-e1 in ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:48:29 np0005534516 systemd-machined[215790]: New machine qemu-131-instance-0000006a.
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.282 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41ed78ca-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.282 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3baa8c77-fe8d-406f-b01e-114fc830b506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[628eea26-e776-4212-9b73-b078cc29ec04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 systemd-udevd[360467]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:29 np0005534516 systemd[1]: Started Virtual Machine qemu-131-instance-0000006a.
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.300 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[86898f55-8ff7-423a-a82b-9b15622882e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.3027] device (tap5b999504-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.3080] device (tap5b999504-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:48:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:29Z|01013|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 ovn-installed in OVS
Nov 25 03:48:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:29Z|01014|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 up in Southbound
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e53c9a5-4397-49c1-8bbf-12538131378e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.346 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[12135bc8-8f23-451a-bddb-741aec46386a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd8fd55-0509-4926-b176-f2b7fd51fba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.3528] manager: (tap41ed78ca-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Nov 25 03:48:29 np0005534516 systemd-udevd[360470]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.385 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[21d23abe-3560-40e2-8688-12930250aebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 podman[360479]: 2025-11-25 08:48:29.387604327 +0000 UTC m=+0.041121253 container create c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.388 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[986e52d9-bd6a-4c5e-a7dd-1391ec9d5d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.4085] device (tap41ed78ca-e0): carrier: link connected
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.413 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcd3b95-0e90-4e9f-92c4-7c6a5757c46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 systemd[1]: Started libpod-conmon-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope.
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f970479-40fa-416a-b7e0-a7b2944de157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ed78ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:60:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586400, 'reachable_time': 38666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360521, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.444 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3032d2-b886-4284-8137-a5de4ed0b058]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:609b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586400, 'tstamp': 586400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360524, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:29 np0005534516 podman[360479]: 2025-11-25 08:48:29.373255283 +0000 UTC m=+0.026772229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.465 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f524342-3110-4d51-a142-12bff50926f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ed78ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:60:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586400, 'reachable_time': 38666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360526, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:29 np0005534516 podman[360479]: 2025-11-25 08:48:29.484691717 +0000 UTC m=+0.138208683 container init c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:48:29 np0005534516 podman[360479]: 2025-11-25 08:48:29.494678904 +0000 UTC m=+0.148195830 container start c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:29 np0005534516 podman[360479]: 2025-11-25 08:48:29.49824864 +0000 UTC m=+0.151765586 container attach c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.504 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.505 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2e994b-1c2e-43e7-a5e4-3a667888bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.520 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b93d627-493a-4676-95af-b8871d57f359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.588 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ed78ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.588 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.589 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41ed78ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 NetworkManager[48915]: <info>  [1764060509.5916] manager: (tap41ed78ca-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 25 03:48:29 np0005534516 kernel: tap41ed78ca-e0: entered promiscuous mode
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41ed78ca-e0, col_values=(('external_ids', {'iface-id': '2a6dde1f-8745-4374-9d65-dba32b48db06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:29Z|01015|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.621 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29c46812-80c3-49e2-8809-11ccb2069b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.623 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-41ed78ca-e8a4-4daf-884b-6b7b763e272f
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 41ed78ca-e8a4-4daf-884b-6b7b763e272f
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:48:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.623 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'env', 'PROCESS_TAG=haproxy-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41ed78ca-e8a4-4daf-884b-6b7b763e272f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:48:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 292 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.770 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060509.7696543, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.770 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Started (Lifecycle Event)#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.789 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060509.771985, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.789 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.806 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:29 np0005534516 nova_compute[253538]: 2025-11-25 08:48:29.838 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:48:30 np0005534516 podman[360601]: 2025-11-25 08:48:30.057653891 +0000 UTC m=+0.063376687 container create bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 03:48:30 np0005534516 systemd[1]: Started libpod-conmon-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope.
Nov 25 03:48:30 np0005534516 podman[360601]: 2025-11-25 08:48:30.020504387 +0000 UTC m=+0.026227173 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:48:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1849c6e6683bfaf37cc6be930869317f1e1d8455b3540b1f7e705a5ef3ca71a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:30 np0005534516 podman[360601]: 2025-11-25 08:48:30.165598863 +0000 UTC m=+0.171321629 container init bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:48:30 np0005534516 podman[360601]: 2025-11-25 08:48:30.176748432 +0000 UTC m=+0.182471188 container start bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:30 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : New worker (360626) forked
Nov 25 03:48:30 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : Loading success.
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]: {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    "0": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "devices": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "/dev/loop3"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            ],
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_name": "ceph_lv0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_size": "21470642176",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "name": "ceph_lv0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "tags": {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_name": "ceph",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.crush_device_class": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.encrypted": "0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_id": "0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.vdo": "0"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            },
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "vg_name": "ceph_vg0"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        }
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    ],
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    "1": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "devices": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "/dev/loop4"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            ],
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_name": "ceph_lv1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_size": "21470642176",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "name": "ceph_lv1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "tags": {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_name": "ceph",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.crush_device_class": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.encrypted": "0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_id": "1",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.vdo": "0"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            },
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "vg_name": "ceph_vg1"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        }
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    ],
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    "2": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "devices": [
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "/dev/loop5"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            ],
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_name": "ceph_lv2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_size": "21470642176",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "name": "ceph_lv2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "tags": {
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.cluster_name": "ceph",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.crush_device_class": "",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.encrypted": "0",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osd_id": "2",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:                "ceph.vdo": "0"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            },
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "type": "block",
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:            "vg_name": "ceph_vg2"
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:        }
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]:    ]
Nov 25 03:48:30 np0005534516 eager_stonebraker[360522]: }
Nov 25 03:48:30 np0005534516 systemd[1]: libpod-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope: Deactivated successfully.
Nov 25 03:48:30 np0005534516 podman[360479]: 2025-11-25 08:48:30.275832645 +0000 UTC m=+0.929349571 container died c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:48:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756-merged.mount: Deactivated successfully.
Nov 25 03:48:30 np0005534516 podman[360479]: 2025-11-25 08:48:30.340466546 +0000 UTC m=+0.993983472 container remove c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:48:30 np0005534516 systemd[1]: libpod-conmon-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope: Deactivated successfully.
Nov 25 03:48:30 np0005534516 podman[360786]: 2025-11-25 08:48:30.976289544 +0000 UTC m=+0.042930430 container create fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:48:31 np0005534516 systemd[1]: Started libpod-conmon-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope.
Nov 25 03:48:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:30.957728738 +0000 UTC m=+0.024369654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:31.054243022 +0000 UTC m=+0.120883978 container init fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:31.066351536 +0000 UTC m=+0.132992412 container start fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:31.070147198 +0000 UTC m=+0.136788074 container attach fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 03:48:31 np0005534516 systemd[1]: libpod-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope: Deactivated successfully.
Nov 25 03:48:31 np0005534516 suspicious_aryabhata[360804]: 167 167
Nov 25 03:48:31 np0005534516 conmon[360804]: conmon fb2781fa68584be05b2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope/container/memory.events
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:31.077136265 +0000 UTC m=+0.143777171 container died fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:48:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0f5f69eee9ad3dcf3a615ec601b0722d9b458cabc86f8788f4cbab61346f961f-merged.mount: Deactivated successfully.
Nov 25 03:48:31 np0005534516 podman[360786]: 2025-11-25 08:48:31.11614009 +0000 UTC m=+0.182780966 container remove fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:48:31 np0005534516 podman[360800]: 2025-11-25 08:48:31.121915114 +0000 UTC m=+0.095296123 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:48:31 np0005534516 systemd[1]: libpod-conmon-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope: Deactivated successfully.
Nov 25 03:48:31 np0005534516 podman[360847]: 2025-11-25 08:48:31.304430392 +0000 UTC m=+0.049332421 container create 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:48:31 np0005534516 systemd[1]: Started libpod-conmon-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope.
Nov 25 03:48:31 np0005534516 podman[360847]: 2025-11-25 08:48:31.278606381 +0000 UTC m=+0.023508490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:48:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.387 253542 DEBUG nova.compute.manager [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.388 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.388 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.389 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.389 253542 DEBUG nova.compute.manager [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Processing event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.390 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:48:31 np0005534516 podman[360847]: 2025-11-25 08:48:31.394870125 +0000 UTC m=+0.139772264 container init 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.396 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060511.3959463, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.396 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.399 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.403 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance spawned successfully.#033[00m
Nov 25 03:48:31 np0005534516 podman[360847]: 2025-11-25 08:48:31.404894123 +0000 UTC m=+0.149796152 container start 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:48:31 np0005534516 podman[360847]: 2025-11-25 08:48:31.409956419 +0000 UTC m=+0.154858448 container attach 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.415 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.420 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:48:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 118 op/s
Nov 25 03:48:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:31.965 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:31 np0005534516 nova_compute[253538]: 2025-11-25 08:48:31.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:31.966 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:48:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:32 np0005534516 brave_hertz[360864]: {
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_id": 1,
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "type": "bluestore"
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    },
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_id": 2,
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "type": "bluestore"
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    },
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_id": 0,
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:        "type": "bluestore"
Nov 25 03:48:32 np0005534516 brave_hertz[360864]:    }
Nov 25 03:48:32 np0005534516 brave_hertz[360864]: }
Nov 25 03:48:32 np0005534516 systemd[1]: libpod-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope: Deactivated successfully.
Nov 25 03:48:32 np0005534516 podman[360847]: 2025-11-25 08:48:32.415891059 +0000 UTC m=+1.160793098 container died 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:48:32 np0005534516 nova_compute[253538]: 2025-11-25 08:48:32.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10-merged.mount: Deactivated successfully.
Nov 25 03:48:32 np0005534516 nova_compute[253538]: 2025-11-25 08:48:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:32 np0005534516 nova_compute[253538]: 2025-11-25 08:48:32.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:48:32 np0005534516 podman[360847]: 2025-11-25 08:48:32.662617027 +0000 UTC m=+1.407519096 container remove 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:32 np0005534516 systemd[1]: libpod-conmon-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope: Deactivated successfully.
Nov 25 03:48:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 25 03:48:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:48:32 np0005534516 nova_compute[253538]: 2025-11-25 08:48:32.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 25 03:48:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:33 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 25 03:48:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:48:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 55f6932e-5d8a-41f1-94c5-c5fe7264fcf4 does not exist
Nov 25 03:48:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c4d665d6-d460-47c5-bae2-3529dfa60c46 does not exist
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.577 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:48:33 np0005534516 nova_compute[253538]: 2025-11-25 08:48:33.577 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.1 MiB/s wr, 138 op/s
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.128 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.128 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.130 253542 WARNING nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.130 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Processing event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.134 253542 WARNING nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.135 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.147 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060514.1473668, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.148 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.150 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.155 253542 INFO nova.virt.libvirt.driver [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance spawned successfully.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.156 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.175 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.180 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.181 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.182 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.182 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.183 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.184 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.189 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.213 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.249 253542 INFO nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.250 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.316 253542 INFO nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 11.25 seconds to build instance.#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.331 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.587 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.653 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:34 np0005534516 nova_compute[253538]: 2025-11-25 08:48:34.979 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:35 np0005534516 nova_compute[253538]: 2025-11-25 08:48:35.007 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:35 np0005534516 nova_compute[253538]: 2025-11-25 08:48:35.008 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:48:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 270 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 MiB/s wr, 119 op/s
Nov 25 03:48:36 np0005534516 nova_compute[253538]: 2025-11-25 08:48:36.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:36 np0005534516 podman[360961]: 2025-11-25 08:48:36.861079999 +0000 UTC m=+0.098419007 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 03:48:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 235 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 86 KiB/s wr, 178 op/s
Nov 25 03:48:37 np0005534516 nova_compute[253538]: 2025-11-25 08:48:37.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:48:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.046 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.143 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.149 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.149 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.364 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.368 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.92182922363281GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.369 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.370 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.465 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.520 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:48:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:38.968 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:48:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272144953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:48:38 np0005534516 nova_compute[253538]: 2025-11-25 08:48:38.997 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.003 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.017 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.044 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.045 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:39Z|01016|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:39 np0005534516 NetworkManager[48915]: <info>  [1764060519.0476] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Nov 25 03:48:39 np0005534516 NetworkManager[48915]: <info>  [1764060519.0494] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Nov 25 03:48:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:39Z|01017|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 03:48:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:39Z|01018|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 03:48:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:39Z|01019|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:39 np0005534516 nova_compute[253538]: 2025-11-25 08:48:39.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 170 op/s
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.035 253542 DEBUG nova.compute.manager [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.035 253542 DEBUG nova.compute.manager [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.039 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.039 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.584 253542 DEBUG nova.objects.instance [None req-2d4b191e-064f-42de-9532-c08f279cf4b0 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.610 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060520.610518, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.611 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.634 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.638 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:40 np0005534516 nova_compute[253538]: 2025-11-25 08:48:40.659 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:41 np0005534516 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 03:48:41 np0005534516 NetworkManager[48915]: <info>  [1764060521.1353] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01020|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01021|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01022|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.151 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.152 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.153 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a08c0da2-ac33-4280-9a58-f345e51389ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.157 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.169 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 03:48:41 np0005534516 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 10.181s CPU time.
Nov 25 03:48:41 np0005534516 systemd-machined[215790]: Machine qemu-130-instance-00000068 terminated.
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.244 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.245 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.262 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:41 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : haproxy version is 2.8.14-c23fe91
Nov 25 03:48:41 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : path to executable is /usr/sbin/haproxy
Nov 25 03:48:41 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [WARNING]  (359870) : Exiting Master process...
Nov 25 03:48:41 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [ALERT]    (359870) : Current worker (359872) exited with code 143 (Terminated)
Nov 25 03:48:41 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [WARNING]  (359870) : All workers exited. Exiting... (0)
Nov 25 03:48:41 np0005534516 systemd[1]: libpod-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope: Deactivated successfully.
Nov 25 03:48:41 np0005534516 podman[361060]: 2025-11-25 08:48:41.289628492 +0000 UTC m=+0.048653354 container died 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:48:41 np0005534516 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 03:48:41 np0005534516 systemd-udevd[361039]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:41 np0005534516 NetworkManager[48915]: <info>  [1764060521.3075] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01023|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01024|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.316 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:41 np0005534516 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 03:48:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d-userdata-shm.mount: Deactivated successfully.
Nov 25 03:48:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-18e74c7d913d4d8440112d04e1a5043b5748bae7519c2709c1298e5997bcd39c-merged.mount: Deactivated successfully.
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01025|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01026|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01027|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01028|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 03:48:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:41Z|01029|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 03:48:41 np0005534516 podman[361060]: 2025-11-25 08:48:41.343618818 +0000 UTC m=+0.102643680 container cleanup 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.346 253542 DEBUG nova.compute.manager [None req-2d4b191e-064f-42de-9532-c08f279cf4b0 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 systemd[1]: libpod-conmon-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope: Deactivated successfully.
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 03:48:41 np0005534516 podman[361097]: 2025-11-25 08:48:41.411896486 +0000 UTC m=+0.043242549 container remove 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c564000-2b7b-4eb7-8f49-417792ea48e2]: (4, ('Tue Nov 25 08:48:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d)\n1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d\nTue Nov 25 08:48:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d)\n1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.419 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce0a48a-b246-4c00-beed-aa7308329dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.420 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb684fee-1ffe-468d-b66d-62206c7c9291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7b0f3f-c18f-4b28-bc7c-7dba7cbb82f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c12e62-b718-4254-a9ba-4b4ab7531d54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.474 253542 DEBUG nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.476 253542 DEBUG nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:41 np0005534516 nova_compute[253538]: 2025-11-25 08:48:41.476 253542 WARNING nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state None.#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.476 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cd79c7-64a5-474a-9c0f-e5a9a5c3299d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585921, 'reachable_time': 17273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361128, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.482 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.482 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8797e6d9-4840-41ae-ab2f-286ff9049d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.483 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:48:41 np0005534516 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.484 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.485 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51102c6c-8c3d-487f-989a-6e6bb3f9adb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.486 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.487 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:48:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.487 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0036a8-ae28-41ad-99cb-b73b82400668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 186 op/s
Nov 25 03:48:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 25 03:48:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 25 03:48:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.809 253542 INFO nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Resuming#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.810 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.836 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.837 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.837 253542 DEBUG nova.network.neutron [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:48:42 np0005534516 nova_compute[253538]: 2025-11-25 08:48:42.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:43 np0005534516 nova_compute[253538]: 2025-11-25 08:48:43.588 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 03:48:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.3 KiB/s wr, 170 op/s
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.821 253542 DEBUG nova.network.neutron [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.846 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.851 253542 DEBUG nova.virt.libvirt.vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:41Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.852 253542 DEBUG nova.network.os_vif_util [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.852 253542 DEBUG nova.network.os_vif_util [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.853 253542 DEBUG os_vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.858 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.858 253542 INFO os_vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.881 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:44 np0005534516 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 03:48:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:44Z|01030|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:44Z|01031|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.957 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '12', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.958 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.959 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064#033[00m
Nov 25 03:48:44 np0005534516 NetworkManager[48915]: <info>  [1764060524.9636] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Nov 25 03:48:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:44Z|01032|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.971 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c66fe68e-d391-4fba-9549-df3b077175fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.972 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:48:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:44Z|01033|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.974 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.974 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ea699c-742d-460a-9b9e-653500cb7727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9d97dd-87a7-4d0c-9058-79f16db6f2d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:44 np0005534516 systemd-udevd[361142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.986 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[bf48e926-c2d7-49e3-a176-91046f4fb5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:44 np0005534516 nova_compute[253538]: 2025-11-25 08:48:44.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:44 np0005534516 NetworkManager[48915]: <info>  [1764060524.9978] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:48:45 np0005534516 systemd-machined[215790]: New machine qemu-132-instance-00000068.
Nov 25 03:48:45 np0005534516 NetworkManager[48915]: <info>  [1764060525.0028] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.010 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91aa5166-4ce5-4e56-b063-6cdb33e1c7e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.038 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9be68a92-4de1-4b31-8981-8525fc6d3b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.043 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d508633a-a61a-4cfc-847e-6465541e6b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 NetworkManager[48915]: <info>  [1764060525.0438] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Nov 25 03:48:45 np0005534516 systemd-udevd[361148]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.079 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bc99cff6-4902-4e1b-bb97-faca9cbed997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5feead4b-ddae-415f-9998-595bef0eb7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 NetworkManager[48915]: <info>  [1764060525.1042] device (tapaa04d86f-70): carrier: link connected
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5be971-c791-4923-8ffc-62bb894235d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.127 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a3be46-c5df-46cb-a8ee-cf386d59cb7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 16497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361176, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.141 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52d96dea-7fca-4413-a159-571eb1710b02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587970, 'tstamp': 587970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361177, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.159 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab54f03f-df66-4223-9bd3-f47c3f350b26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 16497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361178, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.189 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c57fd2-65e4-4bcf-9709-2fd9c9e54f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.262 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[139f1a7c-da6d-45fd-ae6c-138e16275895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.263 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.264 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.264 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:45 np0005534516 NetworkManager[48915]: <info>  [1764060525.2668] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 25 03:48:45 np0005534516 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.270 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:45Z|01034|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.272 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.273 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.274 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[538a30f4-b27b-4a55-b6ca-ee22d8483e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.275 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:48:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.275 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.478 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.479 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060525.478455, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.479 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.493 253542 DEBUG nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.494 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.503 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.506 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.518 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance running successfully.#033[00m
Nov 25 03:48:45 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.521 253542 DEBUG nova.virt.libvirt.guest [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.522 253542 DEBUG nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060525.4817843, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.568 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.574 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:48:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1023 B/s wr, 154 op/s
Nov 25 03:48:45 np0005534516 podman[361252]: 2025-11-25 08:48:45.674711661 +0000 UTC m=+0.057147381 container create a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.704 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 WARNING nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:48:45 np0005534516 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 WARNING nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.#033[00m
Nov 25 03:48:45 np0005534516 systemd[1]: Started libpod-conmon-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope.
Nov 25 03:48:45 np0005534516 podman[361252]: 2025-11-25 08:48:45.647074551 +0000 UTC m=+0.029510301 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:48:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:48:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c4637736e1dcbe4428132fd374615171f835052d57e6b8b7096684d764b77e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:48:45 np0005534516 podman[361252]: 2025-11-25 08:48:45.764411404 +0000 UTC m=+0.146847154 container init a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:48:45 np0005534516 podman[361252]: 2025-11-25 08:48:45.770861046 +0000 UTC m=+0.153296776 container start a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:48:45 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : New worker (361273) forked
Nov 25 03:48:45 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : Loading success.
Nov 25 03:48:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:47 np0005534516 nova_compute[253538]: 2025-11-25 08:48:47.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 221 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Nov 25 03:48:47 np0005534516 nova_compute[253538]: 2025-11-25 08:48:47.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:48Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:68:34 10.100.0.9
Nov 25 03:48:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:48Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:68:34 10.100.0.9
Nov 25 03:48:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:49Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 03:48:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 233 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 25 03:48:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 243 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Nov 25 03:48:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:52 np0005534516 nova_compute[253538]: 2025-11-25 08:48:52.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:52 np0005534516 nova_compute[253538]: 2025-11-25 08:48:52.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:48:53
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:48:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.784580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534784682, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 619, "num_deletes": 252, "total_data_size": 640566, "memory_usage": 651384, "flush_reason": "Manual Compaction"}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 25 03:48:54 np0005534516 nova_compute[253538]: 2025-11-25 08:48:54.787 253542 INFO nova.compute.manager [None req-c0924b2b-b51e-468d-b422-c90007fdb370 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Get console output#033[00m
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534793406, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 634118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42094, "largest_seqno": 42712, "table_properties": {"data_size": 630790, "index_size": 1236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8031, "raw_average_key_size": 19, "raw_value_size": 623927, "raw_average_value_size": 1529, "num_data_blocks": 54, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060497, "oldest_key_time": 1764060497, "file_creation_time": 1764060534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 8871 microseconds, and 3589 cpu microseconds.
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:48:54 np0005534516 nova_compute[253538]: 2025-11-25 08:48:54.794 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.793468) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 634118 bytes OK
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.793496) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796284) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796350) EVENT_LOG_v1 {"time_micros": 1764060534796338, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 637181, prev total WAL file size 637181, number of live WAL files 2.
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.797057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(619KB)], [95(8219KB)]
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534797106, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9051369, "oldest_snapshot_seqno": -1}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6410 keys, 7383917 bytes, temperature: kUnknown
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534852397, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7383917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7343599, "index_size": 23184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 166342, "raw_average_key_size": 25, "raw_value_size": 7231117, "raw_average_value_size": 1128, "num_data_blocks": 907, "num_entries": 6410, "num_filter_entries": 6410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.852820) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7383917 bytes
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.854658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.4 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(25.9) write-amplify(11.6) OK, records in: 6929, records dropped: 519 output_compression: NoCompression
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.854677) EVENT_LOG_v1 {"time_micros": 1764060534854667, "job": 56, "event": "compaction_finished", "compaction_time_micros": 55405, "compaction_time_cpu_micros": 33116, "output_level": 6, "num_output_files": 1, "total_output_size": 7383917, "num_input_records": 6929, "num_output_records": 6410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534854973, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534857126, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:54 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:48:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 03:48:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:48:57 np0005534516 nova_compute[253538]: 2025-11-25 08:48:57.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Nov 25 03:48:57 np0005534516 nova_compute[253538]: 2025-11-25 08:48:57.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:58 np0005534516 podman[361284]: 2025-11-25 08:48:58.821388439 +0000 UTC m=+0.065677090 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.225 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.226 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.227 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.228 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.228 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.230 253542 INFO nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Terminating instance#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.232 253542 DEBUG nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:48:59 np0005534516 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 03:48:59 np0005534516 NetworkManager[48915]: <info>  [1764060539.3180] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:48:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:59Z|01035|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 03:48:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:59Z|01036|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 03:48:59 np0005534516 ovn_controller[152859]: 2025-11-25T08:48:59Z|01037|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.338 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '13', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.341 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.342 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdd8429-4ea8-470f-885c-0b844c808e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 03:48:59 np0005534516 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 4.733s CPU time.
Nov 25 03:48:59 np0005534516 systemd-machined[215790]: Machine qemu-132-instance-00000068 terminated.
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : haproxy version is 2.8.14-c23fe91
Nov 25 03:48:59 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : path to executable is /usr/sbin/haproxy
Nov 25 03:48:59 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [ALERT]    (361271) : Current worker (361273) exited with code 143 (Terminated)
Nov 25 03:48:59 np0005534516 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [WARNING]  (361271) : All workers exited. Exiting... (0)
Nov 25 03:48:59 np0005534516 systemd[1]: libpod-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope: Deactivated successfully.
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 podman[361329]: 2025-11-25 08:48:59.475668302 +0000 UTC m=+0.052338983 container died a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.476 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.476 253542 DEBUG nova.objects.instance [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.490 253542 DEBUG nova.virt.libvirt.vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:45Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.491 253542 DEBUG nova.network.os_vif_util [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.491 253542 DEBUG nova.network.os_vif_util [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.492 253542 DEBUG os_vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7246ed42-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.498 253542 INFO os_vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')#033[00m
Nov 25 03:48:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a-userdata-shm.mount: Deactivated successfully.
Nov 25 03:48:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-27c4637736e1dcbe4428132fd374615171f835052d57e6b8b7096684d764b77e-merged.mount: Deactivated successfully.
Nov 25 03:48:59 np0005534516 podman[361329]: 2025-11-25 08:48:59.552853858 +0000 UTC m=+0.129524519 container cleanup a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:48:59 np0005534516 systemd[1]: libpod-conmon-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope: Deactivated successfully.
Nov 25 03:48:59 np0005534516 podman[361387]: 2025-11-25 08:48:59.622382451 +0000 UTC m=+0.046164728 container remove a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[107b1716-079d-44ed-b273-ad3e9cc004a5]: (4, ('Tue Nov 25 08:48:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a)\na559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a\nTue Nov 25 08:48:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a)\na559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc966103-c07a-4c93-b012-26092e147202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.634 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 03:48:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 1.0 MiB/s wr, 90 op/s
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb34e242-17c0-445b-8dea-462496b548fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.674 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86f7403d-2f41-4501-8905-e58ede1d6c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.676 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08b62d5e-8d7e-4fdb-82c9-ca944ded141f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73ce3328-c1b9-476a-b337-9e0d60a7216f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587963, 'reachable_time': 44558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361403, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.700 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:48:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.700 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[09e260e7-04f2-4a88-881f-043690682133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:48:59 np0005534516 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.917 253542 INFO nova.virt.libvirt.driver [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting instance files /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.918 253542 INFO nova.virt.libvirt.driver [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deletion of /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del complete#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.991 253542 INFO nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.992 253542 DEBUG oslo.service.loopingcall [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.993 253542 DEBUG nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:48:59 np0005534516 nova_compute[253538]: 2025-11-25 08:48:59.993 253542 DEBUG nova.network.neutron [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.114 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.116 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.116 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.774 253542 DEBUG nova.network.neutron [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.801 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 0.81 seconds to deallocate network for instance.#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.861 253542 DEBUG nova.compute.manager [req-a2477d76-dee8-4bd5-8d0c-61626c505780 req-fe2358db-9c3a-41b4-aabb-c6b4fc5eba0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-deleted-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.892 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.893 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:00 np0005534516 nova_compute[253538]: 2025-11-25 08:49:00.977 253542 DEBUG oslo_concurrency.processutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124057411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.419 253542 DEBUG oslo_concurrency.processutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.428 253542 DEBUG nova.compute.provider_tree [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.444 253542 DEBUG nova.scheduler.client.report [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.463 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.490 253542 INFO nova.scheduler.client.report [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98#033[00m
Nov 25 03:49:01 np0005534516 nova_compute[253538]: 2025-11-25 08:49:01.554 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 218 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 341 KiB/s wr, 72 op/s
Nov 25 03:49:01 np0005534516 podman[361429]: 2025-11-25 08:49:01.815761452 +0000 UTC m=+0.064843037 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:49:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.378 253542 DEBUG nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.379 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.379 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.381 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.381 253542 DEBUG nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.382 253542 WARNING nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:49:02 np0005534516 nova_compute[253538]: 2025-11-25 08:49:02.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 167 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 88 KiB/s wr, 52 op/s
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759972549163248 of space, bias 1.0, pg target 0.2279917647489744 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:49:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:49:04 np0005534516 nova_compute[253538]: 2025-11-25 08:49:04.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:05Z|01038|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 03:49:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 167 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 29 op/s
Nov 25 03:49:05 np0005534516 nova_compute[253538]: 2025-11-25 08:49:05.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.830 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.830 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.842 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.910 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.911 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.918 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:49:06 np0005534516 nova_compute[253538]: 2025-11-25 08:49:06.918 253542 INFO nova.compute.claims [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.019 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469260779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.480 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.485 253542 DEBUG nova.compute.provider_tree [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.502 253542 DEBUG nova.scheduler.client.report [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.525 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.526 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.568 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.568 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.583 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.599 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:49:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 167 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 29 op/s
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.665 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.667 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.668 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating image(s)#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.695 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.715 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.739 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.744 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.817 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.818 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.818 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.819 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.839 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.842 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ec34a574-9c78-43d8-a65a-aa4052a5d452_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:07 np0005534516 podman[361513]: 2025-11-25 08:49:07.856183303 +0000 UTC m=+0.108344523 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:49:07 np0005534516 nova_compute[253538]: 2025-11-25 08:49:07.886 253542 DEBUG nova.policy [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.273 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ec34a574-9c78-43d8-a65a-aa4052a5d452_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.337 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.440 253542 DEBUG nova.objects.instance [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.453 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Ensure instance console log exists: /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.455 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:08 np0005534516 nova_compute[253538]: 2025-11-25 08:49:08.957 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Successfully created port: bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:49:09 np0005534516 nova_compute[253538]: 2025-11-25 08:49:09.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 181 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 604 KiB/s wr, 41 op/s
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.193 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Successfully updated port: bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.205 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.206 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.206 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.363 253542 DEBUG nova.compute.manager [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-changed-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.364 253542 DEBUG nova.compute.manager [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Refreshing instance network info cache due to event network-changed-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.364 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:49:10 np0005534516 nova_compute[253538]: 2025-11-25 08:49:10.403 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:49:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 194 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 835 KiB/s wr, 54 op/s
Nov 25 03:49:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.628 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.645 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.646 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance network_info: |[{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.646 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.647 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Refreshing network info cache for port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.649 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start _get_guest_xml network_info=[{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.653 253542 WARNING nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.657 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.658 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.662 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.662 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.663 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.663 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.666 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:49:12 np0005534516 nova_compute[253538]: 2025-11-25 08:49:12.668 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:49:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:49:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124645347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.113 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.135 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.139 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:49:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:49:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637525819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.578 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.580 253542 DEBUG nova.virt.libvirt.vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:07Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.581 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.581 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.583 253542 DEBUG nova.objects.instance [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.604 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <uuid>ec34a574-9c78-43d8-a65a-aa4052a5d452</uuid>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <name>instance-0000006b</name>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-286927500</nova:name>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:49:12</nova:creationTime>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <nova:port uuid="bc66e4f2-ce2b-49ca-ba89-a86607d56e5a">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="serial">ec34a574-9c78-43d8-a65a-aa4052a5d452</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="uuid">ec34a574-9c78-43d8-a65a-aa4052a5d452</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ec34a574-9c78-43d8-a65a-aa4052a5d452_disk">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d3:98:59"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <target dev="tapbc66e4f2-ce"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/console.log" append="off"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:49:13 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:49:13 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:49:13 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:49:13 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.605 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Preparing to wait for external event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.607 253542 DEBUG nova.virt.libvirt.vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:07Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.607 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.608 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.608 253542 DEBUG os_vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.613 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc66e4f2-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.614 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc66e4f2-ce, col_values=(('external_ids', {'iface-id': 'bc66e4f2-ce2b-49ca-ba89-a86607d56e5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:98:59', 'vm-uuid': 'ec34a574-9c78-43d8-a65a-aa4052a5d452'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:13 np0005534516 NetworkManager[48915]: <info>  [1764060553.6503] manager: (tapbc66e4f2-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 25 03:49:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.649 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.655 253542 INFO os_vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce')#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.715 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.715 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.716 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d3:98:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.716 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Using config drive#033[00m
Nov 25 03:49:13 np0005534516 nova_compute[253538]: 2025-11-25 08:49:13.739 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.473 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060539.4726338, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.474 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.490 253542 DEBUG nova.compute.manager [None req-31915953-ace4-495d-9a22-bb1cd8eceb17 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.559 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating config drive at /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.564 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17n7bimt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.707 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17n7bimt" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.735 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.739 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.958 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updated VIF entry in instance network info cache for port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.959 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:14 np0005534516 nova_compute[253538]: 2025-11-25 08:49:14.979 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.132 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.133 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deleting local config drive /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config because it was imported into RBD.#033[00m
Nov 25 03:49:15 np0005534516 kernel: tapbc66e4f2-ce: entered promiscuous mode
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.1804] manager: (tapbc66e4f2-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Nov 25 03:49:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:15Z|01039|binding|INFO|Claiming lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for this chassis.
Nov 25 03:49:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:15Z|01040|binding|INFO|bc66e4f2-ce2b-49ca-ba89-a86607d56e5a: Claiming fa:16:3e:d3:98:59 10.100.0.23
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.191 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.192 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 bound to our chassis#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b2f9d4-63fb-44ac-b745-d86825df1c61#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[add58370-ba19-40db-b15b-518aec5251f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.206 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34b2f9d4-61 in ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.208 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34b2f9d4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.208 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53bdcdb4-167a-4c9a-aa90-2abef5632b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24f5e807-8e17-4427-b9cf-312a972875de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 systemd-machined[215790]: New machine qemu-133-instance-0000006b.
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.222 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2b485511-423f-4efb-9a41-2b2066629cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:15Z|01041|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a ovn-installed in OVS
Nov 25 03:49:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:15Z|01042|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a up in Southbound
Nov 25 03:49:15 np0005534516 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 systemd-udevd[361803]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.247 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[479adfcc-83db-4bbf-8db4-14e6a1631cc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.2532] device (tapbc66e4f2-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.2541] device (tapbc66e4f2-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.275 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[846abea7-0796-4804-845b-2c3727dcbfd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38587ca0-a519-4786-b33d-40760078c197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.2829] manager: (tap34b2f9d4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e83aec7c-fbf7-4026-9e0d-0c8e102f1877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.317 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[810ddae8-31e3-4290-bf3f-179646013604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.3454] device (tap34b2f9d4-60): carrier: link connected
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.351 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[78fbeab3-8c81-4b96-9af0-fc5d274e621e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.370 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab856813-8e54-4bd1-87af-bba6ec3a2213]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b2f9d4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:3b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590994, 'reachable_time': 30016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361833, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1823f16c-fab9-42bb-b69e-0df924083fcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3bef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590994, 'tstamp': 590994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361834, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.424 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[901a7577-9579-4404-a8b4-7c0969c7dd9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b2f9d4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:3b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590994, 'reachable_time': 30016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361835, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.467 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3435674-4c2c-4bc2-83b7-89fbd7381e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.534 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cda45b2c-33ff-4e68-a429-a2748dd99857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.536 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b2f9d4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.537 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.537 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b2f9d4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 NetworkManager[48915]: <info>  [1764060555.5405] manager: (tap34b2f9d4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Nov 25 03:49:15 np0005534516 kernel: tap34b2f9d4-60: entered promiscuous mode
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b2f9d4-60, col_values=(('external_ids', {'iface-id': '07382928-ba7a-4550-9d32-44d1241668cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:15Z|01043|binding|INFO|Releasing lport 07382928-ba7a-4550-9d32-44d1241668cf from this chassis (sb_readonly=0)
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.546 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9727fec1-d9ce-4c2e-87ac-fbbdcc6f76d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.560 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-34b2f9d4-63fb-44ac-b745-d86825df1c61
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 34b2f9d4-63fb-44ac-b745-d86825df1c61
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:49:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.561 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'env', 'PROCESS_TAG=haproxy-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34b2f9d4-63fb-44ac-b745-d86825df1c61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:49:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.812 253542 DEBUG nova.compute.manager [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.812 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.817 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.818 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.818 253542 DEBUG nova.compute.manager [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Processing event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.950 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.953 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9491007, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.954 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Started (Lifecycle Event)#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.957 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.963 253542 INFO nova.virt.libvirt.driver [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance spawned successfully.#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.963 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.975 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:15 np0005534516 podman[361906]: 2025-11-25 08:49:15.986872906 +0000 UTC m=+0.076825929 container create 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.987 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.996 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.997 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:15 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.998 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:15.999 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.000 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.001 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.007 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.008 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9513018, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.008 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:49:16 np0005534516 podman[361906]: 2025-11-25 08:49:15.935385386 +0000 UTC m=+0.025338239 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.034 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.039 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9572742, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.039 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:49:16 np0005534516 systemd[1]: Started libpod-conmon-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope.
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.064 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.067 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:49:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.075 253542 INFO nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 8.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.075 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d709033092afbe32e7301de9ce77134dfb1c21bcf75d55f1f66a87f8dc2459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:49:16 np0005534516 podman[361906]: 2025-11-25 08:49:16.093647585 +0000 UTC m=+0.183600458 container init 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:49:16 np0005534516 podman[361906]: 2025-11-25 08:49:16.099103891 +0000 UTC m=+0.189056754 container start 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:49:16 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : New worker (361928) forked
Nov 25 03:49:16 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : Loading success.
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.131 253542 INFO nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 9.25 seconds to build instance.#033[00m
Nov 25 03:49:16 np0005534516 nova_compute[253538]: 2025-11-25 08:49:16.142 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.906 253542 DEBUG nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:17 np0005534516 nova_compute[253538]: 2025-11-25 08:49:17.906 253542 WARNING nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state None.#033[00m
Nov 25 03:49:18 np0005534516 nova_compute[253538]: 2025-11-25 08:49:18.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Nov 25 03:49:21 np0005534516 nova_compute[253538]: 2025-11-25 08:49:21.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Nov 25 03:49:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:22 np0005534516 nova_compute[253538]: 2025-11-25 08:49:22.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1003 KiB/s wr, 75 op/s
Nov 25 03:49:23 np0005534516 nova_compute[253538]: 2025-11-25 08:49:23.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:24 np0005534516 nova_compute[253538]: 2025-11-25 08:49:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:25 np0005534516 nova_compute[253538]: 2025-11-25 08:49:25.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 03:49:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:27 np0005534516 nova_compute[253538]: 2025-11-25 08:49:27.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 03:49:28 np0005534516 nova_compute[253538]: 2025-11-25 08:49:28.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:49:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:49:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:49:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:49:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:29Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:98:59 10.100.0.23
Nov 25 03:49:29 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:29Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:98:59 10.100.0.23
Nov 25 03:49:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 215 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 346 KiB/s wr, 75 op/s
Nov 25 03:49:29 np0005534516 podman[361939]: 2025-11-25 08:49:29.836669354 +0000 UTC m=+0.078173384 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.474 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.474 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.496 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.564 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.565 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.573 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.573 253542 INFO nova.compute.claims [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:49:30 np0005534516 nova_compute[253538]: 2025-11-25 08:49:30.701 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996448545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.180 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.188 253542 DEBUG nova.compute.provider_tree [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.206 253542 DEBUG nova.scheduler.client.report [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.229 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.230 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.275 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.276 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.301 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.322 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.422 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.423 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.424 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating image(s)#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.452 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.483 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.517 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.523 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.607 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.609 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.610 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.611 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.640 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.645 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 233 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 25 03:49:31 np0005534516 nova_compute[253538]: 2025-11-25 08:49:31.734 253542 DEBUG nova.policy [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:49:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.554 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.609 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.728 253542 DEBUG nova.objects.instance [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.751 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.751 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Ensure instance console log exists: /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:32 np0005534516 nova_compute[253538]: 2025-11-25 08:49:32.778 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Successfully created port: 67b278e0-034e-4bb1-8cba-035ab2a72de3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:49:32 np0005534516 podman[362148]: 2025-11-25 08:49:32.804908677 +0000 UTC m=+0.057425709 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:49:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 264 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 88 op/s
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.827 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Successfully updated port: 67b278e0-034e-4bb1-8cba-035ab2a72de3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.833 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.833 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.834 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.834 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.955 253542 DEBUG nova.compute.manager [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.955 253542 DEBUG nova.compute.manager [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:49:33 np0005534516 nova_compute[253538]: 2025-11-25 08:49:33.956 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.026 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1b01b991-9cde-41c8-9d0c-09e8311a2d01 does not exist
Nov 25 03:49:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c21a5308-e4ff-415d-acb8-76a9fb62479a does not exist
Nov 25 03:49:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c792828-9384-4086-852f-71f111eb13f1 does not exist
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.953 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.976 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.977 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance network_info: |[{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.977 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.978 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.980 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start _get_guest_xml network_info=[{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.988 253542 WARNING nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.997 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:49:34 np0005534516 nova_compute[253538]: 2025-11-25 08:49:34.998 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.001 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.002 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.002 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.003 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.003 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.006 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.006 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.010 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.332028005 +0000 UTC m=+0.067944800 container create b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:35 np0005534516 systemd[1]: Started libpod-conmon-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope.
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.301782925 +0000 UTC m=+0.037699740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.432112236 +0000 UTC m=+0.168029031 container init b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.439117503 +0000 UTC m=+0.175034298 container start b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:49:35 np0005534516 optimistic_bell[362594]: 167 167
Nov 25 03:49:35 np0005534516 systemd[1]: libpod-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope: Deactivated successfully.
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.450783785 +0000 UTC m=+0.186700660 container attach b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.451392662 +0000 UTC m=+0.187309487 container died b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191068514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.491 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.528 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.534 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dc5264ad69715d30f95aea435c50859cf7292d8da1ec24cf8c60ab718585ed33-merged.mount: Deactivated successfully.
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.589 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.608 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.608 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.609 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:35 np0005534516 nova_compute[253538]: 2025-11-25 08:49:35.609 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:49:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 285 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 3.4 MiB/s wr, 66 op/s
Nov 25 03:49:35 np0005534516 podman[362578]: 2025-11-25 08:49:35.699389144 +0000 UTC m=+0.435305969 container remove b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:49:35 np0005534516 systemd[1]: libpod-conmon-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope: Deactivated successfully.
Nov 25 03:49:35 np0005534516 podman[362660]: 2025-11-25 08:49:35.940064239 +0000 UTC m=+0.066392289 container create 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:49:35 np0005534516 podman[362660]: 2025-11-25 08:49:35.895385963 +0000 UTC m=+0.021714003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:49:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438672530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.019 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.023 253542 DEBUG nova.virt.libvirt.vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:31Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.024 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.025 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.028 253542 DEBUG nova.objects.instance [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:36 np0005534516 systemd[1]: Started libpod-conmon-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope.
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.043 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <uuid>a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</uuid>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <name>instance-0000006c</name>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2097673480</nova:name>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:49:34</nova:creationTime>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <nova:port uuid="67b278e0-034e-4bb1-8cba-035ab2a72de3">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="serial">a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="uuid">a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d5:f1:de"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <target dev="tap67b278e0-03"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/console.log" append="off"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:49:36 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:49:36 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:49:36 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:49:36 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.043 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Preparing to wait for external event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.044 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.044 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.045 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.046 253542 DEBUG nova.virt.libvirt.vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:31Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.047 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.047 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.048 253542 DEBUG os_vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.050 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.051 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67b278e0-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67b278e0-03, col_values=(('external_ids', {'iface-id': '67b278e0-034e-4bb1-8cba-035ab2a72de3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:f1:de', 'vm-uuid': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:36 np0005534516 NetworkManager[48915]: <info>  [1764060576.0645] manager: (tap67b278e0-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:49:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.073 253542 INFO os_vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03')#033[00m
Nov 25 03:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:36 np0005534516 podman[362660]: 2025-11-25 08:49:36.184978209 +0000 UTC m=+0.311306289 container init 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:36 np0005534516 podman[362660]: 2025-11-25 08:49:36.197702559 +0000 UTC m=+0.324030609 container start 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:d5:f1:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Using config drive#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.240 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:36 np0005534516 podman[362660]: 2025-11-25 08:49:36.251762387 +0000 UTC m=+0.378090447 container attach 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.628 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.629 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.677 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.820 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating config drive at /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.829 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqktfooxn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:36 np0005534516 nova_compute[253538]: 2025-11-25 08:49:36.983 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqktfooxn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.009 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.013 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:37 np0005534516 vigorous_curie[362678]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:49:37 np0005534516 vigorous_curie[362678]: --> relative data size: 1.0
Nov 25 03:49:37 np0005534516 vigorous_curie[362678]: --> All data devices are unavailable
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.228 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.230 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.231 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.235 253542 INFO nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Terminating instance#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.237 253542 DEBUG nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:49:37 np0005534516 systemd[1]: libpod-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope: Deactivated successfully.
Nov 25 03:49:37 np0005534516 podman[362765]: 2025-11-25 08:49:37.2812914 +0000 UTC m=+0.025089843 container died 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.502956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577502997, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 629, "num_deletes": 260, "total_data_size": 667157, "memory_usage": 680312, "flush_reason": "Manual Compaction"}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577534971, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 660917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42713, "largest_seqno": 43341, "table_properties": {"data_size": 657542, "index_size": 1219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7747, "raw_average_key_size": 18, "raw_value_size": 650719, "raw_average_value_size": 1579, "num_data_blocks": 54, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060535, "oldest_key_time": 1764060535, "file_creation_time": 1764060577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 32073 microseconds, and 3420 cpu microseconds.
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.535026) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 660917 bytes OK
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.535049) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560611) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560649) EVENT_LOG_v1 {"time_micros": 1764060577560641, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 663718, prev total WAL file size 663718, number of live WAL files 2.
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.561250) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353130' seq:72057594037927935, type:22 .. '6C6F676D0031373636' seq:0, type:0; will stop at (end)
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(645KB)], [98(7210KB)]
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577561275, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 8044834, "oldest_snapshot_seqno": -1}
Nov 25 03:49:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3-merged.mount: Deactivated successfully.
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6290 keys, 7911635 bytes, temperature: kUnknown
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577636292, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7911635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871020, "index_size": 23801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164809, "raw_average_key_size": 26, "raw_value_size": 7759444, "raw_average_value_size": 1233, "num_data_blocks": 930, "num_entries": 6290, "num_filter_entries": 6290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.636565) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7911635 bytes
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.653281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.1 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(24.1) write-amplify(12.0) OK, records in: 6822, records dropped: 532 output_compression: NoCompression
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.653349) EVENT_LOG_v1 {"time_micros": 1764060577653331, "job": 58, "event": "compaction_finished", "compaction_time_micros": 75124, "compaction_time_cpu_micros": 19393, "output_level": 6, "num_output_files": 1, "total_output_size": 7911635, "num_input_records": 6822, "num_output_records": 6290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577653612, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577654793, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.561159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:49:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 25 03:49:37 np0005534516 podman[362765]: 2025-11-25 08:49:37.733560261 +0000 UTC m=+0.477358684 container remove 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:49:37 np0005534516 systemd[1]: libpod-conmon-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope: Deactivated successfully.
Nov 25 03:49:37 np0005534516 kernel: tapbc66e4f2-ce (unregistering): left promiscuous mode
Nov 25 03:49:37 np0005534516 NetworkManager[48915]: <info>  [1764060577.7896] device (tapbc66e4f2-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:49:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:37Z|01044|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=0)
Nov 25 03:49:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:37Z|01045|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down in Southbound
Nov 25 03:49:37 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:37Z|01046|binding|INFO|Removing iface tapbc66e4f2-ce ovn-installed in OVS
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.857 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.858 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis#033[00m
Nov 25 03:49:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.859 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:49:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.860 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa69bcc2-61d7-434f-9023-e741ebc5b87a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.860 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 namespace which is not needed anymore#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:37 np0005534516 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 25 03:49:37 np0005534516 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 13.484s CPU time.
Nov 25 03:49:37 np0005534516 systemd-machined[215790]: Machine qemu-133-instance-0000006b terminated.
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.966 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.952s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:37 np0005534516 nova_compute[253538]: 2025-11-25 08:49:37.966 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deleting local config drive /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config because it was imported into RBD.#033[00m
Nov 25 03:49:38 np0005534516 kernel: tap67b278e0-03: entered promiscuous mode
Nov 25 03:49:38 np0005534516 NetworkManager[48915]: <info>  [1764060578.0242] manager: (tap67b278e0-03): new Tun device (/org/freedesktop/NetworkManager/Devices/430)
Nov 25 03:49:38 np0005534516 systemd-udevd[362818]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:49:38 np0005534516 NetworkManager[48915]: <info>  [1764060578.0352] device (tap67b278e0-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:49:38 np0005534516 NetworkManager[48915]: <info>  [1764060578.0360] device (tap67b278e0-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01047|binding|INFO|Claiming lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 for this chassis.
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01048|binding|INFO|67b278e0-034e-4bb1-8cba-035ab2a72de3: Claiming fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.043 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.043 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.045 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f1:de 10.100.0.12'], port_security=['fa:16:3e:d5:f1:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76a697a7-7255-4dde-bfd3-a4f7c520b32e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddbaf93a-8581-4a98-b32a-d829e79ecbfd, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=67b278e0-034e-4bb1-8cba-035ab2a72de3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01049|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 ovn-installed in OVS
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01050|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 up in Southbound
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 podman[362812]: 2025-11-25 08:49:38.066322613 +0000 UTC m=+0.173715783 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:49:38 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : haproxy version is 2.8.14-c23fe91
Nov 25 03:49:38 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : path to executable is /usr/sbin/haproxy
Nov 25 03:49:38 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [WARNING]  (361926) : Exiting Master process...
Nov 25 03:49:38 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [ALERT]    (361926) : Current worker (361928) exited with code 143 (Terminated)
Nov 25 03:49:38 np0005534516 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [WARNING]  (361926) : All workers exited. Exiting... (0)
Nov 25 03:49:38 np0005534516 NetworkManager[48915]: <info>  [1764060578.0685] manager: (tapbc66e4f2-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Nov 25 03:49:38 np0005534516 systemd[1]: libpod-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope: Deactivated successfully.
Nov 25 03:49:38 np0005534516 kernel: tapbc66e4f2-ce: entered promiscuous mode
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01051|binding|INFO|Claiming lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for this chassis.
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01052|binding|INFO|bc66e4f2-ce2b-49ca-ba89-a86607d56e5a: Claiming fa:16:3e:d3:98:59 10.100.0.23
Nov 25 03:49:38 np0005534516 kernel: tapbc66e4f2-ce (unregistering): left promiscuous mode
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.077 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:38 np0005534516 podman[362874]: 2025-11-25 08:49:38.079884337 +0000 UTC m=+0.132049558 container died 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:49:38 np0005534516 systemd-machined[215790]: New machine qemu-134-instance-0000006c.
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.099 253542 INFO nova.virt.libvirt.driver [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance destroyed successfully.#033[00m
Nov 25 03:49:38 np0005534516 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.100 253542 DEBUG nova.objects.instance [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01053|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a ovn-installed in OVS
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01054|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a up in Southbound
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01055|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=1)
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01056|binding|INFO|Removing iface tapbc66e4f2-ce ovn-installed in OVS
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01057|if_status|INFO|Not setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down as sb is readonly
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01058|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=0)
Nov 25 03:49:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:38Z|01059|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down in Southbound
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.112 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.113 253542 DEBUG nova.virt.libvirt.vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:49:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:49:16Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.113 253542 DEBUG nova.network.os_vif_util [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.114 253542 DEBUG nova.network.os_vif_util [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.114 253542 DEBUG os_vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.117 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc66e4f2-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.122 253542 INFO os_vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce')#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.155 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.365 253542 DEBUG nova.compute.manager [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.367 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.367 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.368 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.368 253542 DEBUG nova.compute.manager [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Processing event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:49:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e2d709033092afbe32e7301de9ce77134dfb1c21bcf75d55f1f66a87f8dc2459-merged.mount: Deactivated successfully.
Nov 25 03:49:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241-userdata-shm.mount: Deactivated successfully.
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:38 np0005534516 podman[362874]: 2025-11-25 08:49:38.663936598 +0000 UTC m=+0.716101819 container cleanup 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:49:38 np0005534516 systemd[1]: libpod-conmon-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope: Deactivated successfully.
Nov 25 03:49:38 np0005534516 podman[363052]: 2025-11-25 08:49:38.853330411 +0000 UTC m=+0.155731222 container remove 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ea3d1d-e5e1-4d71-8dcf-68dc00a984df]: (4, ('Tue Nov 25 08:49:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 (50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241)\n50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241\nTue Nov 25 08:49:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 (50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241)\n50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59c26321-5321-4d58-ae4e-763f82cf6d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.868 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b2f9d4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:38 np0005534516 kernel: tap34b2f9d4-60: left promiscuous mode
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e39681f1-9c0c-4b0d-ab4a-25f9d099969a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12c68f42-47fc-419c-9304-3816f62de4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc23548d-ccda-49b6-98f2-f0c07e9dedf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6511d47-37c7-4241-850c-8ce3ff15f5b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590986, 'reachable_time': 35568, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363135, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.994 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060578.9941876, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.995 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Started (Lifecycle Event)#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.996 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.996 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[14646793-fa81-4a0a-af01-b7716bd604e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.997 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 67b278e0-034e-4bb1-8cba-035ab2a72de3 in datapath 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c unbound from our chassis#033[00m
Nov 25 03:49:38 np0005534516 podman[363119]: 2025-11-25 08:49:38.997213214 +0000 UTC m=+0.093711571 container create e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:49:38 np0005534516 nova_compute[253538]: 2025-11-25 08:49:38.997 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:49:38 np0005534516 systemd[1]: run-netns-ovnmeta\x2d34b2f9d4\x2d63fb\x2d44ac\x2db745\x2dd86825df1c61.mount: Deactivated successfully.
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.004 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.009 253542 INFO nova.virt.libvirt.driver [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance spawned successfully.#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.009 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ec390b-f2ad-401f-8b76-563b527abb06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.012 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6cdb9ec3-61 in ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.013 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.015 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6cdb9ec3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.015 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec60da9-870a-494e-af81-427bc3f6498f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.015 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43d32f67-496d-4403-a760-966815e770a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.026 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.028 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.028 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.031 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c1a88-6086-4e7b-bb67-ffcfecdaf9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:38.944171253 +0000 UTC m=+0.040669640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.041 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060578.994403, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.041 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8469cb6c-4a31-44ef-a33f-338a98899b63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 systemd[1]: Started libpod-conmon-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope.
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.062 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.065 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060579.001391, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.065 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.084 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.088 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0b1e84-2638-4089-9ec3-9236bd5e877d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.091 253542 INFO nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.092 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6ceef6-c522-4d72-ad4e-15115a795a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 NetworkManager[48915]: <info>  [1764060579.0988] manager: (tap6cdb9ec3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.119 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.133 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ce3893-bcbc-4e0f-ab7f-a05bf5245816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886141674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.138 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9aeff855-dae3-4ea6-afce-b6bfd8ad672c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:39.14343013 +0000 UTC m=+0.239928507 container init e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:39.151651661 +0000 UTC m=+0.248150018 container start e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:49:39 np0005534516 happy_nobel[363142]: 167 167
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.155 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:39 np0005534516 systemd[1]: libpod-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope: Deactivated successfully.
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.159 253542 INFO nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 8.62 seconds to build instance.#033[00m
Nov 25 03:49:39 np0005534516 NetworkManager[48915]: <info>  [1764060579.1646] device (tap6cdb9ec3-60): carrier: link connected
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:39.170753321 +0000 UTC m=+0.267251708 container attach e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:39.172073857 +0000 UTC m=+0.268572224 container died e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.172 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55f619-56dd-4cc5-acec-0995dedc2e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.178 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.192 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e183afd9-f3e4-4246-8e00-5e2a4e1fc9e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdb9ec3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ec:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593376, 'reachable_time': 27825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363174, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e457073-ac4c-4b3d-90ba-0c848cabffb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:ec5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593376, 'tstamp': 593376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363179, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4571ae45-bbcd-49f8-8b3c-cbbfadfcc5f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdb9ec3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ec:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593376, 'reachable_time': 27825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363180, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.255 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.255 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.257 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[740def7c-6af5-42ac-a9a1-495330b61f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.258 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.259 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.268 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.268 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9569e6a7-0436-4546-b979-4b8eff55988e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdb9ec3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cdb9ec3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:39 np0005534516 NetworkManager[48915]: <info>  [1764060579.3319] manager: (tap6cdb9ec3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Nov 25 03:49:39 np0005534516 kernel: tap6cdb9ec3-60: entered promiscuous mode
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6cdb9ec3-60, col_values=(('external_ids', {'iface-id': 'a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:49:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:39Z|01060|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.349 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf04abee-f7e2-4704-80b9-4453e0d44e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.351 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:49:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.352 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'env', 'PROCESS_TAG=haproxy-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9d4b8906c251b2b7e57b0e9e5272232cbdb6f9837bc1e0c0de887bbceee89836-merged.mount: Deactivated successfully.
Nov 25 03:49:39 np0005534516 podman[363119]: 2025-11-25 08:49:39.530018174 +0000 UTC m=+0.626516521 container remove e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.538 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.539 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3513MB free_disk=59.87651062011719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.539 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.540 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.617 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ec34a574-9c78-43d8-a65a-aa4052a5d452 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:49:39 np0005534516 systemd[1]: libpod-conmon-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope: Deactivated successfully.
Nov 25 03:49:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 03:49:39 np0005534516 nova_compute[253538]: 2025-11-25 08:49:39.680 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:49:39 np0005534516 podman[363211]: 2025-11-25 08:49:39.771145282 +0000 UTC m=+0.079087460 container create f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:49:39 np0005534516 podman[363211]: 2025-11-25 08:49:39.720892566 +0000 UTC m=+0.028834614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:39 np0005534516 systemd[1]: Started libpod-conmon-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope.
Nov 25 03:49:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:39 np0005534516 podman[363230]: 2025-11-25 08:49:39.867214414 +0000 UTC m=+0.133418794 container create c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:39 np0005534516 podman[363230]: 2025-11-25 08:49:39.783556664 +0000 UTC m=+0.049761074 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:39 np0005534516 podman[363211]: 2025-11-25 08:49:39.919220637 +0000 UTC m=+0.227162655 container init f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:49:39 np0005534516 podman[363211]: 2025-11-25 08:49:39.927793487 +0000 UTC m=+0.235735485 container start f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:49:39 np0005534516 systemd[1]: Started libpod-conmon-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope.
Nov 25 03:49:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51884f3ed72ef29fe201d1c8b86187d773b15028358389bb3a90b47cd47948b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:40 np0005534516 podman[363211]: 2025-11-25 08:49:40.017128429 +0000 UTC m=+0.325070477 container attach f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:49:40 np0005534516 podman[363230]: 2025-11-25 08:49:40.071910366 +0000 UTC m=+0.338114776 container init c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:49:40 np0005534516 podman[363230]: 2025-11-25 08:49:40.081659538 +0000 UTC m=+0.347863918 container start c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:49:40 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : New worker (363279) forked
Nov 25 03:49:40 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : Loading success.
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.142 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.143 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.143 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.144 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.144 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.147 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.147 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.150 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.151 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85e7c194-8c80-4815-b372-2b38564e7c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.152 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.153 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.153 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.153 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93084fa8-4dd8-429b-9e4d-8023eea56a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:49:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.154 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:49:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651653293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.225 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.233 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.257 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.277 253542 INFO nova.virt.libvirt.driver [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deleting instance files /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452_del#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.278 253542 INFO nova.virt.libvirt.driver [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deletion of /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452_del complete#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.282 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.338 253542 INFO nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 3.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.338 253542 DEBUG oslo.service.loopingcall [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.339 253542 DEBUG nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.339 253542 DEBUG nova.network.neutron [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.516 253542 DEBUG nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:40 np0005534516 nova_compute[253538]: 2025-11-25 08:49:40.518 253542 WARNING nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received unexpected event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]: {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    "0": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "devices": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "/dev/loop3"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            ],
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_name": "ceph_lv0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_size": "21470642176",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "name": "ceph_lv0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "tags": {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_name": "ceph",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.crush_device_class": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.encrypted": "0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_id": "0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.vdo": "0"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            },
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "vg_name": "ceph_vg0"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        }
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    ],
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    "1": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "devices": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "/dev/loop4"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            ],
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_name": "ceph_lv1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_size": "21470642176",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "name": "ceph_lv1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "tags": {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_name": "ceph",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.crush_device_class": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.encrypted": "0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_id": "1",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.vdo": "0"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            },
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "vg_name": "ceph_vg1"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        }
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    ],
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    "2": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "devices": [
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "/dev/loop5"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            ],
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_name": "ceph_lv2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_size": "21470642176",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "name": "ceph_lv2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "tags": {
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.cluster_name": "ceph",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.crush_device_class": "",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.encrypted": "0",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osd_id": "2",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:                "ceph.vdo": "0"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            },
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "type": "block",
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:            "vg_name": "ceph_vg2"
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:        }
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]:    ]
Nov 25 03:49:40 np0005534516 gracious_mendel[363263]: }
Nov 25 03:49:40 np0005534516 systemd[1]: libpod-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope: Deactivated successfully.
Nov 25 03:49:40 np0005534516 podman[363211]: 2025-11-25 08:49:40.756084539 +0000 UTC m=+1.064026547 container died f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:49:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13-merged.mount: Deactivated successfully.
Nov 25 03:49:40 np0005534516 podman[363211]: 2025-11-25 08:49:40.897933808 +0000 UTC m=+1.205875807 container remove f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:49:40 np0005534516 systemd[1]: libpod-conmon-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope: Deactivated successfully.
Nov 25 03:49:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.074 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.223 253542 DEBUG nova.network.neutron [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.251 253542 INFO nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 0.91 seconds to deallocate network for instance.
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.276 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.303 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.303 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.429 253542 DEBUG oslo_concurrency.processutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.545298056 +0000 UTC m=+0.045972332 container create fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:49:41 np0005534516 systemd[1]: Started libpod-conmon-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope.
Nov 25 03:49:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.522125166 +0000 UTC m=+0.022799462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.636781466 +0000 UTC m=+0.137460542 container init fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.643943158 +0000 UTC m=+0.144617434 container start fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:49:41 np0005534516 modest_albattani[363484]: 167 167
Nov 25 03:49:41 np0005534516 systemd[1]: libpod-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope: Deactivated successfully.
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.654151081 +0000 UTC m=+0.154825397 container attach fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.654584963 +0000 UTC m=+0.155259249 container died fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:49:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 260 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Nov 25 03:49:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-88440674ec54114c41870e7212d11b3016c757d5aff6a0b7a670128015928746-merged.mount: Deactivated successfully.
Nov 25 03:49:41 np0005534516 podman[363449]: 2025-11-25 08:49:41.727464825 +0000 UTC m=+0.228139111 container remove fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:49:41 np0005534516 systemd[1]: libpod-conmon-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope: Deactivated successfully.
Nov 25 03:49:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739853544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.919 253542 DEBUG oslo_concurrency.processutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.930 253542 DEBUG nova.compute.provider_tree [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:49:41 np0005534516 podman[363508]: 2025-11-25 08:49:41.94795621 +0000 UTC m=+0.060087651 container create 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.949 253542 DEBUG nova.scheduler.client.report [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:49:41 np0005534516 nova_compute[253538]: 2025-11-25 08:49:41.976 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:49:42 np0005534516 systemd[1]: Started libpod-conmon-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope.
Nov 25 03:49:42 np0005534516 podman[363508]: 2025-11-25 08:49:41.921215863 +0000 UTC m=+0.033347314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:49:42 np0005534516 nova_compute[253538]: 2025-11-25 08:49:42.024 253542 INFO nova.scheduler.client.report [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance ec34a574-9c78-43d8-a65a-aa4052a5d452
Nov 25 03:49:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:49:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:49:42 np0005534516 podman[363508]: 2025-11-25 08:49:42.073588675 +0000 UTC m=+0.185720206 container init 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 03:49:42 np0005534516 podman[363508]: 2025-11-25 08:49:42.081703341 +0000 UTC m=+0.193834812 container start 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:49:42 np0005534516 podman[363508]: 2025-11-25 08:49:42.090652402 +0000 UTC m=+0.202783843 container attach 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:49:42 np0005534516 nova_compute[253538]: 2025-11-25 08:49:42.100 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:49:42 np0005534516 nova_compute[253538]: 2025-11-25 08:49:42.310 253542 DEBUG nova.compute.manager [req-a59d175c-ed76-4fc8-9552-88001618296c req-ab356573-6698-407b-a439-236be186e439 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-deleted-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:49:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:42 np0005534516 nova_compute[253538]: 2025-11-25 08:49:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:49:42 np0005534516 nova_compute[253538]: 2025-11-25 08:49:42.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:43 np0005534516 jovial_colden[363524]: {
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_id": 1,
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "type": "bluestore"
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    },
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_id": 2,
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "type": "bluestore"
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    },
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_id": 0,
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:        "type": "bluestore"
Nov 25 03:49:43 np0005534516 jovial_colden[363524]:    }
Nov 25 03:49:43 np0005534516 jovial_colden[363524]: }
Nov 25 03:49:43 np0005534516 systemd[1]: libpod-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Deactivated successfully.
Nov 25 03:49:43 np0005534516 podman[363508]: 2025-11-25 08:49:43.111754538 +0000 UTC m=+1.223885969 container died 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:49:43 np0005534516 systemd[1]: libpod-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Consumed 1.036s CPU time.
Nov 25 03:49:43 np0005534516 nova_compute[253538]: 2025-11-25 08:49:43.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e-merged.mount: Deactivated successfully.
Nov 25 03:49:43 np0005534516 podman[363508]: 2025-11-25 08:49:43.188382991 +0000 UTC m=+1.300514432 container remove 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 03:49:43 np0005534516 systemd[1]: libpod-conmon-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Deactivated successfully.
Nov 25 03:49:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:49:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:49:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 39305fec-3473-42a5-8618-3f487b7272c3 does not exist
Nov 25 03:49:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 02efaacc-18a0-4659-857e-4a0d65f69301 does not exist
Nov 25 03:49:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 213 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 147 op/s
Nov 25 03:49:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:49:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:44Z|01061|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 03:49:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:44Z|01062|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.407 253542 DEBUG nova.compute.manager [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.408 253542 DEBUG nova.compute.manager [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.408 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.409 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.409 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:49:44 np0005534516 nova_compute[253538]: 2025-11-25 08:49:44.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.434 253542 DEBUG nova.compute.manager [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.435 253542 DEBUG nova.compute.manager [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.591 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.591 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.592 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.593 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.593 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.595 253542 INFO nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Terminating instance
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.596 253542 DEBUG nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 03:49:45 np0005534516 kernel: tap5b999504-81 (unregistering): left promiscuous mode
Nov 25 03:49:45 np0005534516 NetworkManager[48915]: <info>  [1764060585.6607] device (tap5b999504-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:49:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 213 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 134 op/s
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:45Z|01063|binding|INFO|Releasing lport 5b999504-81af-4e3d-9707-b0a72b902669 from this chassis (sb_readonly=0)
Nov 25 03:49:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:45Z|01064|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 down in Southbound
Nov 25 03:49:45 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:45Z|01065|binding|INFO|Removing iface tap5b999504-81 ovn-installed in OVS
Nov 25 03:49:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.678 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:68:34 10.100.0.9'], port_security=['fa:16:3e:cf:68:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a5a2de2-f65d-4e79-a42e-c5ccdc573b10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a84b49-c79a-4804-945b-0e3005e5ab18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5b999504-81af-4e3d-9707-b0a72b902669) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:49:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.679 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5b999504-81af-4e3d-9707-b0a72b902669 in datapath 41ed78ca-e8a4-4daf-884b-6b7b763e272f unbound from our chassis
Nov 25 03:49:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.680 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41ed78ca-e8a4-4daf-884b-6b7b763e272f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 03:49:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3da42f4b-a13e-41fb-a2aa-92fc5347c850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f namespace which is not needed anymore
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 25 03:49:45 np0005534516 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Consumed 15.912s CPU time.
Nov 25 03:49:45 np0005534516 systemd-machined[215790]: Machine qemu-131-instance-0000006a terminated.
Nov 25 03:49:45 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : haproxy version is 2.8.14-c23fe91
Nov 25 03:49:45 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : path to executable is /usr/sbin/haproxy
Nov 25 03:49:45 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [WARNING]  (360622) : Exiting Master process...
Nov 25 03:49:45 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [ALERT]    (360622) : Current worker (360626) exited with code 143 (Terminated)
Nov 25 03:49:45 np0005534516 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [WARNING]  (360622) : All workers exited. Exiting... (0)
Nov 25 03:49:45 np0005534516 systemd[1]: libpod-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope: Deactivated successfully.
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.830 253542 INFO nova.virt.libvirt.driver [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance destroyed successfully.
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.831 253542 DEBUG nova.objects.instance [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:49:45 np0005534516 podman[363644]: 2025-11-25 08:49:45.833429719 +0000 UTC m=+0.046564888 container died bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.842 253542 DEBUG nova.virt.libvirt.vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:34Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.845 253542 DEBUG nova.network.os_vif_util [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.846 253542 DEBUG nova.network.os_vif_util [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.846 253542 DEBUG os_vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.848 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b999504-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:45 np0005534516 nova_compute[253538]: 2025-11-25 08:49:45.855 253542 INFO os_vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81')
Nov 25 03:49:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f-userdata-shm.mount: Deactivated successfully.
Nov 25 03:49:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d1849c6e6683bfaf37cc6be930869317f1e1d8455b3540b1f7e705a5ef3ca71a-merged.mount: Deactivated successfully.
Nov 25 03:49:45 np0005534516 podman[363644]: 2025-11-25 08:49:45.936989423 +0000 UTC m=+0.150124572 container cleanup bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:49:45 np0005534516 systemd[1]: libpod-conmon-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope: Deactivated successfully.
Nov 25 03:49:46 np0005534516 podman[363704]: 2025-11-25 08:49:46.01529413 +0000 UTC m=+0.048925742 container remove bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.024 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd91fff-e742-451f-b4e5-f2e4e5fe866c]: (4, ('Tue Nov 25 08:49:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f (bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f)\nbbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f\nTue Nov 25 08:49:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f (bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f)\nbbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.027 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00afde97-bcfc-4b71-ba96-6a9280bc6358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.029 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ed78ca-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:46 np0005534516 kernel: tap41ed78ca-e0: left promiscuous mode
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d99d73ba-1766-4a11-90f2-86a3a0671a33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea96c7-2f6f-4238-90bb-afb5ad89488b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c137e0-2f02-43e7-90cd-0df0e3f71c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc0171e-8f92-4c98-a63b-92a6335b7c31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586394, 'reachable_time': 16603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363719, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 systemd[1]: run-netns-ovnmeta\x2d41ed78ca\x2de8a4\x2d4daf\x2d884b\x2d6b7b763e272f.mount: Deactivated successfully.
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.091 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.091 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a501b747-6a31-4b7f-945e-3cd05fb48c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:49:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.155 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.203 253542 INFO nova.virt.libvirt.driver [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deleting instance files /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_del
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.206 253542 INFO nova.virt.libvirt.driver [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deletion of /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_del complete
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.278 253542 INFO nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.279 253542 DEBUG oslo.service.loopingcall [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.279 253542 DEBUG nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 03:49:46 np0005534516 nova_compute[253538]: 2025-11-25 08:49:46.280 253542 DEBUG nova.network.neutron [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.127 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.129 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.147 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.262 253542 DEBUG nova.network.neutron [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.280 253542 INFO nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 1.00 seconds to deallocate network for instance.
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.319 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.320 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.395 253542 DEBUG oslo_concurrency.processutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:49:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.647 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 WARNING nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.651 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.652 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.652 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 WARNING nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-deleted-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:49:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 171 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 531 KiB/s wr, 138 op/s
Nov 25 03:49:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:49:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523222807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.869 253542 DEBUG oslo_concurrency.processutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.875 253542 DEBUG nova.compute.provider_tree [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.893 253542 DEBUG nova.scheduler.client.report [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:47 np0005534516 nova_compute[253538]: 2025-11-25 08:49:47.961 253542 INFO nova.scheduler.client.report [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe#033[00m
Nov 25 03:49:48 np0005534516 nova_compute[253538]: 2025-11-25 08:49:48.297 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:48 np0005534516 nova_compute[253538]: 2025-11-25 08:49:48.350 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:49:48 np0005534516 nova_compute[253538]: 2025-11-25 08:49:48.351 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:49:48 np0005534516 nova_compute[253538]: 2025-11-25 08:49:48.369 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:49:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 154 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 120 op/s
Nov 25 03:49:50 np0005534516 nova_compute[253538]: 2025-11-25 08:49:50.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 134 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 126 op/s
Nov 25 03:49:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:51Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 03:49:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:51Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 03:49:52 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 03:49:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:52 np0005534516 nova_compute[253538]: 2025-11-25 08:49:52.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:53 np0005534516 nova_compute[253538]: 2025-11-25 08:49:53.097 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060578.0957832, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:53 np0005534516 nova_compute[253538]: 2025-11-25 08:49:53.097 253542 INFO nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:49:53 np0005534516 nova_compute[253538]: 2025-11-25 08:49:53.116 253542 DEBUG nova.compute.manager [None req-4bad9868-c77d-4cc7-a328-40ab7f05430d - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:49:53
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr']
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 143 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 961 KiB/s wr, 151 op/s
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:49:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:49:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:49:53Z|01066|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 03:49:54 np0005534516 nova_compute[253538]: 2025-11-25 08:49:54.034 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 164 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 03:49:55 np0005534516 nova_compute[253538]: 2025-11-25 08:49:55.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.557 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.559 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.559 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.599 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Image id 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e yields fingerprint ad982bd9427c86feb49d0b60fa1a5b2511227adc _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] image 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e at (/var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc): checking#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] image 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e at (/var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.603 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Active base files: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Removable base files: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.606 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 25 03:49:57 np0005534516 nova_compute[253538]: 2025-11-25 08:49:57.606 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 25 03:49:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.575 253542 INFO nova.compute.manager [None req-71708210-6bc6-4ee9-ae8e-307a0bafd9ab 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.581 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.860 253542 INFO nova.compute.manager [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Pausing#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.861 253542 DEBUG nova.objects.instance [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.884 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060598.8844662, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.884 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.886 253542 DEBUG nova.compute.manager [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.907 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.911 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:49:58 np0005534516 nova_compute[253538]: 2025-11-25 08:49:58.933 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 03:49:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 03:50:00 np0005534516 podman[363742]: 2025-11-25 08:50:00.819059287 +0000 UTC m=+0.073406437 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:50:00 np0005534516 nova_compute[253538]: 2025-11-25 08:50:00.827 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060585.8265743, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:00 np0005534516 nova_compute[253538]: 2025-11-25 08:50:00.828 253542 INFO nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:50:00 np0005534516 nova_compute[253538]: 2025-11-25 08:50:00.844 253542 DEBUG nova.compute.manager [None req-1e14336a-ee55-467d-917f-79e8a6ac8e11 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:00 np0005534516 nova_compute[253538]: 2025-11-25 08:50:00.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 25 03:50:01 np0005534516 nova_compute[253538]: 2025-11-25 08:50:01.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:02 np0005534516 nova_compute[253538]: 2025-11-25 08:50:02.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.414 253542 INFO nova.compute.manager [None req-b27c68a6-1b9d-474e-b49b-1ba52da493d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.421 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.562 253542 INFO nova.compute.manager [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Unpausing#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.563 253542 DEBUG nova.objects.instance [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060603.5904915, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.591 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:50:03 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.595 253542 DEBUG nova.virt.libvirt.guest [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.595 253542 DEBUG nova.compute.manager [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.611 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:03 np0005534516 nova_compute[253538]: 2025-11-25 08:50:03.621 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:50:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 03:50:03 np0005534516 podman[363762]: 2025-11-25 08:50:03.841680417 +0000 UTC m=+0.084434133 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:50:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:50:04 np0005534516 nova_compute[253538]: 2025-11-25 08:50:04.467 253542 INFO nova.compute.manager [None req-172c20c4-70c0-4bdf-bb0f-0568f5238bef 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output#033[00m
Nov 25 03:50:04 np0005534516 nova_compute[253538]: 2025-11-25 08:50:04.473 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:50:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:50:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.5 total, 600.0 interval#012Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.90 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5652 writes, 21K keys, 5652 commit groups, 1.0 writes per commit group, ingest: 22.96 MB, 0.04 MB/s#012Interval WAL: 5653 writes, 2226 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.537 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.539 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.540 253542 INFO nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Terminating instance#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.541 253542 DEBUG nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:50:05 np0005534516 kernel: tap67b278e0-03 (unregistering): left promiscuous mode
Nov 25 03:50:05 np0005534516 NetworkManager[48915]: <info>  [1764060605.5969] device (tap67b278e0-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:05Z|01067|binding|INFO|Releasing lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 from this chassis (sb_readonly=0)
Nov 25 03:50:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:05Z|01068|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 down in Southbound
Nov 25 03:50:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:05Z|01069|binding|INFO|Removing iface tap67b278e0-03 ovn-installed in OVS
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.619 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f1:de 10.100.0.12'], port_security=['fa:16:3e:d5:f1:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76a697a7-7255-4dde-bfd3-a4f7c520b32e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddbaf93a-8581-4a98-b32a-d829e79ecbfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=67b278e0-034e-4bb1-8cba-035ab2a72de3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.622 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 67b278e0-034e-4bb1-8cba-035ab2a72de3 in datapath 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c unbound from our chassis#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.624 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb222281-3834-4a4b-bd86-7251812b7a98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.627 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c namespace which is not needed anymore#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.627 253542 DEBUG nova.compute.manager [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG nova.compute.manager [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.629 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 03:50:05 np0005534516 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 03:50:05 np0005534516 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Consumed 13.733s CPU time.
Nov 25 03:50:05 np0005534516 systemd-machined[215790]: Machine qemu-134-instance-0000006c terminated.
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.787 253542 INFO nova.virt.libvirt.driver [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance destroyed successfully.#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.788 253542 DEBUG nova.objects.instance [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : haproxy version is 2.8.14-c23fe91
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : path to executable is /usr/sbin/haproxy
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : Exiting Master process...
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : Exiting Master process...
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [ALERT]    (363277) : Current worker (363279) exited with code 143 (Terminated)
Nov 25 03:50:05 np0005534516 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : All workers exited. Exiting... (0)
Nov 25 03:50:05 np0005534516 systemd[1]: libpod-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope: Deactivated successfully.
Nov 25 03:50:05 np0005534516 podman[363808]: 2025-11-25 08:50:05.822653089 +0000 UTC m=+0.067713794 container died c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.849 253542 DEBUG nova.virt.libvirt.vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:49:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:03Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.849 253542 DEBUG nova.network.os_vif_util [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.852 253542 DEBUG nova.network.os_vif_util [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.853 253542 DEBUG os_vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:50:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.858 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67b278e0-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-51884f3ed72ef29fe201d1c8b86187d773b15028358389bb3a90b47cd47948b0-merged.mount: Deactivated successfully.
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.867 253542 INFO os_vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03')#033[00m
Nov 25 03:50:05 np0005534516 podman[363808]: 2025-11-25 08:50:05.872048403 +0000 UTC m=+0.117109118 container cleanup c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:50:05 np0005534516 systemd[1]: libpod-conmon-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope: Deactivated successfully.
Nov 25 03:50:05 np0005534516 podman[363855]: 2025-11-25 08:50:05.945339045 +0000 UTC m=+0.047748719 container remove c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2d68fd-ca2e-49b2-a496-841191866c60]: (4, ('Tue Nov 25 08:50:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c (c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b)\nc76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b\nTue Nov 25 08:50:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c (c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b)\nc76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.953 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7af7179f-8b08-48ac-8382-ca297ac34a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.955 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdb9ec3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 kernel: tap6cdb9ec3-60: left promiscuous mode
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 nova_compute[253538]: 2025-11-25 08:50:05.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d312f268-f719-4a1b-8723-d3cc012f821f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acc9bf1b-e7a8-448a-afec-d94ce86194a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75754567-253a-4667-bcc6-3acdb3da1607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc16e26-cba5-4437-a53e-ed427b71493f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593368, 'reachable_time': 21018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363881, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:06 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6cdb9ec3\x2d6144\x2d4767\x2d9719\x2dddbf0a68bf7c.mount: Deactivated successfully.
Nov 25 03:50:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.027 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:50:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.027 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7f302018-54d9-41fd-b611-f88338301a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.207 253542 INFO nova.virt.libvirt.driver [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deleting instance files /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_del#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.208 253542 INFO nova.virt.libvirt.driver [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deletion of /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_del complete#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.282 253542 INFO nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.282 253542 DEBUG oslo.service.loopingcall [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.283 253542 DEBUG nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:50:06 np0005534516 nova_compute[253538]: 2025-11-25 08:50:06.283 253542 DEBUG nova.network.neutron [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:50:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:07 np0005534516 nova_compute[253538]: 2025-11-25 08:50:07.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 150 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 22 op/s
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.465 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:08 np0005534516 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:50:08 np0005534516 podman[363883]: 2025-11-25 08:50:08.877210717 +0000 UTC m=+0.099730713 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:50:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 129 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 13 KiB/s wr, 18 op/s
Nov 25 03:50:09 np0005534516 nova_compute[253538]: 2025-11-25 08:50:09.894 253542 DEBUG nova.network.neutron [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:09 np0005534516 nova_compute[253538]: 2025-11-25 08:50:09.932 253542 INFO nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 3.65 seconds to deallocate network for instance.#033[00m
Nov 25 03:50:09 np0005534516 nova_compute[253538]: 2025-11-25 08:50:09.996 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:09 np0005534516 nova_compute[253538]: 2025-11-25 08:50:09.996 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.085 253542 DEBUG oslo_concurrency.processutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.187 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.188 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.204 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.215 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.234 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.291 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851252405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.557 253542 DEBUG oslo_concurrency.processutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.565 253542 DEBUG nova.compute.provider_tree [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.583 253542 DEBUG nova.scheduler.client.report [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.606 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.611 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.620 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.620 253542 INFO nova.compute.claims [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.643 253542 INFO nova.scheduler.client.report [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.684 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.685 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.685 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.686 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.686 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.687 253542 WARNING nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received unexpected event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.687 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-deleted-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.688 253542 INFO nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Neutron deleted interface 67b278e0-034e-4bb1-8cba-035ab2a72de3; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.688 253542 DEBUG nova.network.neutron [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.724 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Detach interface failed, port_id=67b278e0-034e-4bb1-8cba-035ab2a72de3, reason: Instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.747 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.761 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:10 np0005534516 nova_compute[253538]: 2025-11-25 08:50:10.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.254 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.260 253542 DEBUG nova.compute.provider_tree [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.277 253542 DEBUG nova.scheduler.client.report [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.302 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.303 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.357 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.358 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.393 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.407 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.507 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.509 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.510 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating image(s)#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.544 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.568 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.596 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.599 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:50:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.4 total, 600.0 interval#012Cumulative writes: 32K writes, 126K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.89 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5256 writes, 19K keys, 5256 commit groups, 1.0 writes per commit group, ingest: 17.05 MB, 0.03 MB/s#012Interval WAL: 5256 writes, 2172 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.637 253542 DEBUG nova.policy [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:50:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.683 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.684 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.685 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.686 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.729 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:11 np0005534516 nova_compute[253538]: 2025-11-25 08:50:11.733 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.066 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.124 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.208 253542 DEBUG nova.objects.instance [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.222 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.222 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Ensure instance console log exists: /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.223 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.223 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.224 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.306 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully created port: d0945383-2a0d-4019-9b60-eea96d667c69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:50:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:12 np0005534516 nova_compute[253538]: 2025-11-25 08:50:12.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.246 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully updated port: d0945383-2a0d-4019-9b60-eea96d667c69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.265 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.266 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.266 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.411 253542 DEBUG nova.compute.manager [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.412 253542 DEBUG nova.compute.manager [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.413 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.511 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 94 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 205 KiB/s wr, 28 op/s
Nov 25 03:50:13 np0005534516 nova_compute[253538]: 2025-11-25 08:50:13.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.711 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.738 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.738 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance network_info: |[{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.739 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.739 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.745 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start _get_guest_xml network_info=[{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.751 253542 WARNING nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.761 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.762 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.767 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.767 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.768 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.769 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.769 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.770 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.770 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.771 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.771 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.773 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.773 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:50:14 np0005534516 nova_compute[253538]: 2025-11-25 08:50:14.778 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706721461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.326 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.357 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.360 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:50:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 110 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 679 KiB/s wr, 42 op/s
Nov 25 03:50:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:50:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376361003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.823 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.826 253542 DEBUG nova.virt.libvirt.vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:11Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.827 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.829 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.831 253542 DEBUG nova.objects.instance [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.852 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <name>instance-0000006d</name>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:50:14</nova:creationTime>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="serial">49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="uuid">49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:88:13:51"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <target dev="tapd0945383-2a"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log" append="off"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:50:15 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:15 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:15 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:15 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.854 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Preparing to wait for external event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.855 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.857 253542 DEBUG nova.virt.libvirt.vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:11Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.858 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.859 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.860 253542 DEBUG os_vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.863 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0945383-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0945383-2a, col_values=(('external_ids', {'iface-id': 'd0945383-2a0d-4019-9b60-eea96d667c69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:13:51', 'vm-uuid': '49b75125-0ca4-438d-9f2a-1d130a6b5632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:15 np0005534516 NetworkManager[48915]: <info>  [1764060615.8730] manager: (tapd0945383-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.879 253542 INFO os_vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a')#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.961 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.962 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.962 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:88:13:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.963 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Using config drive#033[00m
Nov 25 03:50:15 np0005534516 nova_compute[253538]: 2025-11-25 08:50:15.989 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.754 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.755 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.773 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.780 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating config drive at /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.789 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4r65ayrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.938 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4r65ayrp" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.963 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:16 np0005534516 nova_compute[253538]: 2025-11-25 08:50:16.966 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.133 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.134 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deleting local config drive /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config because it was imported into RBD.#033[00m
Nov 25 03:50:17 np0005534516 kernel: tapd0945383-2a: entered promiscuous mode
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.1781] manager: (tapd0945383-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:17Z|01070|binding|INFO|Claiming lport d0945383-2a0d-4019-9b60-eea96d667c69 for this chassis.
Nov 25 03:50:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:17Z|01071|binding|INFO|d0945383-2a0d-4019-9b60-eea96d667c69: Claiming fa:16:3e:88:13:51 10.100.0.3
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.192 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:51 10.100.0.3'], port_security=['fa:16:3e:88:13:51 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c833a599-5a18-44d2-82ad-b16f7476c220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4dacd795-ee8f-4895-b3fe-aaa7865132b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d0e444d-0863-4949-9e8d-d9b0bfd89ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0945383-2a0d-4019-9b60-eea96d667c69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.194 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0945383-2a0d-4019-9b60-eea96d667c69 in datapath c833a599-5a18-44d2-82ad-b16f7476c220 bound to our chassis#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.195 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c833a599-5a18-44d2-82ad-b16f7476c220#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.207 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19c94124-6b33-41a3-96f0-d1d6013d4797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.208 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc833a599-51 in ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:50:17 np0005534516 systemd-machined[215790]: New machine qemu-135-instance-0000006d.
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc833a599-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.210 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93b3a383-147c-406d-9db9-bc8868d172ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fad87e76-9cd7-4cbb-8306-e0104b8cfa75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.221 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[799608cb-47db-4fcc-bcee-8c4f823303e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7225d4-ef3e-4315-a01a-98ef8c6ba0e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:17Z|01072|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 ovn-installed in OVS
Nov 25 03:50:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:17Z|01073|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 up in Southbound
Nov 25 03:50:17 np0005534516 systemd-udevd[364257]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.2649] device (tapd0945383-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.2659] device (tapd0945383-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.278 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[874c7dd1-8a9d-4c97-aee1-942388ac35c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 systemd-udevd[364261]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.284 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa80b4-0c19-4c82-82df-2f25bd23a0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.2850] manager: (tapc833a599-50): new Veth device (/org/freedesktop/NetworkManager/Devices/436)
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.338 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe936c-e930-435c-9a8c-875868371b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.340 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[488385dc-2219-4f76-9f82-f94d3eea6378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.3636] device (tapc833a599-50): carrier: link connected
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2f1cd1-83a0-450a-b98a-6ebf0c8ff12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.385 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc1c043-54ae-4e81-83a5-7091845ac92f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc833a599-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:30:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597196, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364287, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.401 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3325b6d-6aa7-4d26-8fa8-6a090995fdfc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:300d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597196, 'tstamp': 597196}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364288, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.415 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc7957b-e771-48f8-a4ac-6bf691d883d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc833a599-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:30:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597196, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364289, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96fe0728-ca14-4285-ab01-b74618586956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.498 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[320f2e2a-0aae-46f3-83e9-65a9d623cf96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.499 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc833a599-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.500 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.500 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc833a599-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.509 253542 DEBUG nova.compute.manager [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.509 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG nova.compute.manager [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Processing event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 NetworkManager[48915]: <info>  [1764060617.5528] manager: (tapc833a599-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Nov 25 03:50:17 np0005534516 kernel: tapc833a599-50: entered promiscuous mode
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.556 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc833a599-50, col_values=(('external_ids', {'iface-id': 'bf9779d5-46bc-415b-b1d2-7b9d4a76754d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:17Z|01074|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.590 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfad831-9b77-447b-8c47-8f5cb6b9368f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.594 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-c833a599-5a18-44d2-82ad-b16f7476c220
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID c833a599-5a18-44d2-82ad-b16f7476c220
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:50:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.596 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'env', 'PROCESS_TAG=haproxy-c833a599-5a18-44d2-82ad-b16f7476c220', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c833a599-5a18-44d2-82ad-b16f7476c220.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:50:17 np0005534516 nova_compute[253538]: 2025-11-25 08:50:17.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 03:50:17 np0005534516 podman[364319]: 2025-11-25 08:50:17.985891259 +0000 UTC m=+0.049321203 container create 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:50:18 np0005534516 systemd[1]: Started libpod-conmon-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope.
Nov 25 03:50:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75c05cfa6c1a51f5b9e1bfff96754528c2b327c0c5c0ba20f6d03e2e0d2380a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:18 np0005534516 podman[364319]: 2025-11-25 08:50:17.959248556 +0000 UTC m=+0.022678520 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:50:18 np0005534516 podman[364319]: 2025-11-25 08:50:18.066552389 +0000 UTC m=+0.129982433 container init 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:50:18 np0005534516 podman[364319]: 2025-11-25 08:50:18.079753233 +0000 UTC m=+0.143183187 container start 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:50:18 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : New worker (364382) forked
Nov 25 03:50:18 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : Loading success.
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.129 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1290493, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Started (Lifecycle Event)#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.132 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.137 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.140 253542 INFO nova.virt.libvirt.driver [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance spawned successfully.#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.140 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.145 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.148 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.156 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.158 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.158 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1294785, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.183 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.187 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1367126, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.188 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.209 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.212 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.217 253542 INFO nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 6.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.218 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.226 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.286 253542 INFO nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 8.01 seconds to build instance.#033[00m
Nov 25 03:50:18 np0005534516 nova_compute[253538]: 2025-11-25 08:50:18.320 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.638 253542 DEBUG nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.638 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:19 np0005534516 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 WARNING nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 03:50:20 np0005534516 nova_compute[253538]: 2025-11-25 08:50:20.786 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060605.7854335, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:20 np0005534516 nova_compute[253538]: 2025-11-25 08:50:20.787 253542 INFO nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:50:20 np0005534516 nova_compute[253538]: 2025-11-25 08:50:20.805 253542 DEBUG nova.compute.manager [None req-4683676e-c4e9-40e8-b1e5-0601c1bf2528 - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:20 np0005534516 nova_compute[253538]: 2025-11-25 08:50:20.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 03:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:50:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3602.4 total, 600.0 interval#012Cumulative writes: 26K writes, 102K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 26K writes, 8894 syncs, 2.94 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3508 writes, 13K keys, 3508 commit groups, 1.0 writes per commit group, ingest: 13.36 MB, 0.02 MB/s#012Interval WAL: 3508 writes, 1387 syncs, 2.53 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 03:50:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:22 np0005534516 nova_compute[253538]: 2025-11-25 08:50:22.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:22 np0005534516 nova_compute[253538]: 2025-11-25 08:50:22.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:22 np0005534516 NetworkManager[48915]: <info>  [1764060622.6693] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Nov 25 03:50:22 np0005534516 NetworkManager[48915]: <info>  [1764060622.6709] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Nov 25 03:50:22 np0005534516 nova_compute[253538]: 2025-11-25 08:50:22.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:22Z|01075|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 03:50:22 np0005534516 nova_compute[253538]: 2025-11-25 08:50:22.763 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:23 np0005534516 nova_compute[253538]: 2025-11-25 08:50:23.306 253542 DEBUG nova.compute.manager [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:23 np0005534516 nova_compute[253538]: 2025-11-25 08:50:23.307 253542 DEBUG nova.compute.manager [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:23 np0005534516 nova_compute[253538]: 2025-11-25 08:50:23.307 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:23 np0005534516 nova_compute[253538]: 2025-11-25 08:50:23.308 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:23 np0005534516 nova_compute[253538]: 2025-11-25 08:50:23.308 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 03:50:24 np0005534516 nova_compute[253538]: 2025-11-25 08:50:24.606 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:25 np0005534516 nova_compute[253538]: 2025-11-25 08:50:25.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:25 np0005534516 nova_compute[253538]: 2025-11-25 08:50:25.427 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:25 np0005534516 nova_compute[253538]: 2025-11-25 08:50:25.429 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:25 np0005534516 nova_compute[253538]: 2025-11-25 08:50:25.447 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 99 op/s
Nov 25 03:50:25 np0005534516 nova_compute[253538]: 2025-11-25 08:50:25.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:27 np0005534516 nova_compute[253538]: 2025-11-25 08:50:27.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 03:50:28 np0005534516 nova_compute[253538]: 2025-11-25 08:50:28.994 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:28 np0005534516 nova_compute[253538]: 2025-11-25 08:50:28.997 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.023 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.124 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.133 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.133 253542 INFO nova.compute.claims [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.304 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801715794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.899 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.908 253542 DEBUG nova.compute.provider_tree [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.929 253542 DEBUG nova.scheduler.client.report [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.952 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:29 np0005534516 nova_compute[253538]: 2025-11-25 08:50:29.954 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.005 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.006 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.038 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.065 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.163 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.165 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.165 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating image(s)#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.189 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.219 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.242 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.245 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.309 253542 DEBUG nova.policy [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.350 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.351 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.351 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.352 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.370 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.373 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b4f98996-3a98-41ad-af66-af37066515d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.728 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b4f98996-3a98-41ad-af66-af37066515d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.804 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.909 253542 DEBUG nova.objects.instance [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.969 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.970 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Ensure instance console log exists: /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.970 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.971 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:30 np0005534516 nova_compute[253538]: 2025-11-25 08:50:30.971 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:31 np0005534516 nova_compute[253538]: 2025-11-25 08:50:31.177 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Successfully created port: 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:50:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 03:50:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:31Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:13:51 10.100.0.3
Nov 25 03:50:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:31Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:13:51 10.100.0.3
Nov 25 03:50:31 np0005534516 podman[364582]: 2025-11-25 08:50:31.860490773 +0000 UTC m=+0.093523917 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.257 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Successfully updated port: 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.271 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.271 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.272 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.356 253542 DEBUG nova.compute.manager [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.357 253542 DEBUG nova.compute.manager [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.358 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:32 np0005534516 nova_compute[253538]: 2025-11-25 08:50:32.790 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:50:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 192 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 118 op/s
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.060 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.079 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.080 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance network_info: |[{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.080 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.081 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.086 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start _get_guest_xml network_info=[{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.093 253542 WARNING nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.099 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.099 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.109 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.109 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.111 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:50:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018625777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.550 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.576 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.583 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.638 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.639 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.667 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.668 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:34 np0005534516 nova_compute[253538]: 2025-11-25 08:50:34.669 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:50:34 np0005534516 podman[364645]: 2025-11-25 08:50:34.836609847 +0000 UTC m=+0.078145614 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:50:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:50:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174282057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.083 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.086 253542 DEBUG nova.virt.libvirt.vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:30Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.087 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.088 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.090 253542 DEBUG nova.objects.instance [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.119 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <uuid>b4f98996-3a98-41ad-af66-af37066515d3</uuid>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <name>instance-0000006e</name>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-642924609</nova:name>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:50:34</nova:creationTime>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <nova:port uuid="0547929c-86ba-4aaa-869f-c7e2b5ea7e67">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="serial">b4f98996-3a98-41ad-af66-af37066515d3</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="uuid">b4f98996-3a98-41ad-af66-af37066515d3</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/b4f98996-3a98-41ad-af66-af37066515d3_disk">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/b4f98996-3a98-41ad-af66-af37066515d3_disk.config">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:86:20:35"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <target dev="tap0547929c-86"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/console.log" append="off"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:50:35 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:35 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:35 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:35 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.121 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Preparing to wait for external event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.122 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.124 253542 DEBUG nova.virt.libvirt.vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:30Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.124 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.125 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.126 253542 DEBUG os_vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.127 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.128 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.135 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0547929c-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.136 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0547929c-86, col_values=(('external_ids', {'iface-id': '0547929c-86ba-4aaa-869f-c7e2b5ea7e67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:20:35', 'vm-uuid': 'b4f98996-3a98-41ad-af66-af37066515d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:35 np0005534516 NetworkManager[48915]: <info>  [1764060635.1391] manager: (tap0547929c-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.148 253542 INFO os_vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86')#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.196 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.196 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.197 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:86:20:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.197 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Using config drive#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.221 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.902 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating config drive at /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config#033[00m
Nov 25 03:50:35 np0005534516 nova_compute[253538]: 2025-11-25 08:50:35.911 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sios1hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.075 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sios1hf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.109 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.114 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config b4f98996-3a98-41ad-af66-af37066515d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.283 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config b4f98996-3a98-41ad-af66-af37066515d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.284 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deleting local config drive /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config because it was imported into RBD.#033[00m
Nov 25 03:50:36 np0005534516 kernel: tap0547929c-86: entered promiscuous mode
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.3544] manager: (tap0547929c-86): new Tun device (/org/freedesktop/NetworkManager/Devices/441)
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:36Z|01076|binding|INFO|Claiming lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for this chassis.
Nov 25 03:50:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:36Z|01077|binding|INFO|0547929c-86ba-4aaa-869f-c7e2b5ea7e67: Claiming fa:16:3e:86:20:35 10.100.0.11
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.364 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.365 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 bound to our chassis#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.366 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0613062-c56d-4f59-a1bd-5487b9cae905#033[00m
Nov 25 03:50:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:36Z|01078|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 ovn-installed in OVS
Nov 25 03:50:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:36Z|01079|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 up in Southbound
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.376 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1e32b9-276e-4bbb-996a-d37f57875901]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.377 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0613062-c1 in ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.379 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0613062-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76a326e1-99a5-4ab5-99bd-2a417b57d7cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.380 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d15baec-2b59-45a8-b23f-0904e16491a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:36 np0005534516 systemd-udevd[364758]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.391 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da8d02cf-3105-41b4-84cc-26c9fae72ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 systemd-machined[215790]: New machine qemu-136-instance-0000006e.
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.3967] device (tap0547929c-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.3976] device (tap0547929c-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:50:36 np0005534516 systemd[1]: Started Virtual Machine qemu-136-instance-0000006e.
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a00d46-a3c3-4e6c-9f88-9661855b86f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.441 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f66d7efb-05f4-4256-92b5-368e48843558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[745c2219-2632-43f0-8d48-7d6d1d8cc7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.4470] manager: (tapc0613062-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/442)
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.483 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e9cbf7-e4ce-4da0-8f73-13d617812f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.486 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[de44e7d0-6846-4c58-8ecc-0a518beee5c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.5123] device (tapc0613062-c0): carrier: link connected
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.517 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[085625e5-7755-4336-8206-f3c5dca8f3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d561b019-ca23-4866-90f8-5c2607288421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599111, 'reachable_time': 17002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364791, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[458f2929-b4b4-4aa9-94dd-91864d3304eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a31e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599111, 'tstamp': 599111}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364792, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8627d53-e5b7-4c63-be46-40e5be69bf6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599111, 'reachable_time': 17002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364793, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.587 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.587 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.600 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c150579c-7a42-4f54-9086-489af16ee07f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.652 253542 DEBUG nova.compute.manager [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.653 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.653 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.654 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.655 253542 DEBUG nova.compute.manager [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Processing event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8222e1a3-f66f-43b6-b59d-4e4a76e40f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.716 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.717 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.718 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0613062-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:36 np0005534516 NetworkManager[48915]: <info>  [1764060636.7215] manager: (tapc0613062-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Nov 25 03:50:36 np0005534516 kernel: tapc0613062-c0: entered promiscuous mode
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.732 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0613062-c0, col_values=(('external_ids', {'iface-id': '0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:36 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:36Z|01080|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.736 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.737 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37d94f66-1147-4dc8-ad19-17c42de0a083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.738 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:50:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.739 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'env', 'PROCESS_TAG=haproxy-c0613062-c56d-4f59-a1bd-5487b9cae905', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0613062-c56d-4f59-a1bd-5487b9cae905.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:50:36 np0005534516 nova_compute[253538]: 2025-11-25 08:50:36.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.120 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.119723, b4f98996-3a98-41ad-af66-af37066515d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.121 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Started (Lifecycle Event)#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.123 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.127 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.132 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance spawned successfully.#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.132 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.146 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.152 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:50:37 np0005534516 podman[364866]: 2025-11-25 08:50:37.154613197 +0000 UTC m=+0.067022776 container create 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.158 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.159 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.159 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.1198974, b4f98996-3a98-41ad-af66-af37066515d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:50:37 np0005534516 systemd[1]: Started libpod-conmon-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope.
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.198 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.203 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.1258154, b4f98996-3a98-41ad-af66-af37066515d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.203 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:50:37 np0005534516 podman[364866]: 2025-11-25 08:50:37.118533061 +0000 UTC m=+0.030942700 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.220 253542 INFO nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 7.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.220 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.222 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.229 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:50:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf67b50908d3dd49b2c0a0481b4509ed99ab81eb1cd3289fe65b839d4061e7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:50:37 np0005534516 podman[364866]: 2025-11-25 08:50:37.260244896 +0000 UTC m=+0.172654475 container init 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:50:37 np0005534516 podman[364866]: 2025-11-25 08:50:37.265903787 +0000 UTC m=+0.178313366 container start 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:50:37 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : New worker (364888) forked
Nov 25 03:50:37 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : Loading success.
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.304 253542 INFO nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 8.21 seconds to build instance.#033[00m
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.321 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:37 np0005534516 nova_compute[253538]: 2025-11-25 08:50:37.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.268 253542 INFO nova.compute.manager [None req-2eafb20a-8fee-4b08-adf5-c44d34dd792d 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Get console output#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.273 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.577 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.921 253542 DEBUG nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.922 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.922 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 DEBUG nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:38 np0005534516 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 WARNING nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:39 np0005534516 nova_compute[253538]: 2025-11-25 08:50:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:39 np0005534516 nova_compute[253538]: 2025-11-25 08:50:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 03:50:39 np0005534516 podman[364897]: 2025-11-25 08:50:39.871676235 +0000 UTC m=+0.124107936 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.685 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:40 np0005534516 nova_compute[253538]: 2025-11-25 08:50:40.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.687 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:50:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289358426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.074 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.336 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3477MB free_disk=59.92185974121094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance b4f98996-3a98-41ad-af66-af37066515d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.517 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.517 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:50:41 np0005534516 nova_compute[253538]: 2025-11-25 08:50:41.633 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 03:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570297059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.200 253542 DEBUG nova.compute.manager [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.201 253542 DEBUG nova.compute.manager [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.202 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.202 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.203 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.204 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.212 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.233 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:50:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.270 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.271 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.272 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.272 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.288 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.327 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.327 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.328 253542 DEBUG nova.objects.instance [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.961 253542 DEBUG nova.objects.instance [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_requests' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:42 np0005534516 nova_compute[253538]: 2025-11-25 08:50:42.979 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:50:43 np0005534516 nova_compute[253538]: 2025-11-25 08:50:43.211 253542 DEBUG nova.policy [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:50:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 25 03:50:43 np0005534516 nova_compute[253538]: 2025-11-25 08:50:43.882 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully created port: 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:50:44 np0005534516 nova_compute[253538]: 2025-11-25 08:50:44.030 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:44 np0005534516 nova_compute[253538]: 2025-11-25 08:50:44.031 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:44 np0005534516 nova_compute[253538]: 2025-11-25 08:50:44.046 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 43ad9e0a-d835-412f-b19c-390bb7c66bd8 does not exist
Nov 25 03:50:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9215fbf3-59fe-4379-ab3a-4cfd39fc3581 does not exist
Nov 25 03:50:44 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev faef36d8-2c4a-416d-b213-56cb66b224b6 does not exist
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:44 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.051 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully updated port: 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.201 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.202 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.202 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.142202148 +0000 UTC m=+0.034120405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.312856888 +0000 UTC m=+0.204775075 container create 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.374 253542 DEBUG nova.compute.manager [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.375 253542 DEBUG nova.compute.manager [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-340ce0e3-8b72-4b40-afcb-53f30e6cc961. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:45 np0005534516 nova_compute[253538]: 2025-11-25 08:50:45.375 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:45 np0005534516 systemd[1]: Started libpod-conmon-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope.
Nov 25 03:50:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.501839829 +0000 UTC m=+0.393758056 container init 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.51417766 +0000 UTC m=+0.406095807 container start 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:50:45 np0005534516 elastic_mccarthy[365260]: 167 167
Nov 25 03:50:45 np0005534516 systemd[1]: libpod-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope: Deactivated successfully.
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.61951375 +0000 UTC m=+0.511431937 container attach 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.620366394 +0000 UTC m=+0.512284571 container died 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:50:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Nov 25 03:50:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0c74f875615c9d312dd008f4958c530c8ed7006d7919e6917d9b470930b88962-merged.mount: Deactivated successfully.
Nov 25 03:50:45 np0005534516 podman[365244]: 2025-11-25 08:50:45.78379758 +0000 UTC m=+0.675715747 container remove 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:50:45 np0005534516 systemd[1]: libpod-conmon-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope: Deactivated successfully.
Nov 25 03:50:45 np0005534516 podman[365284]: 2025-11-25 08:50:45.970044758 +0000 UTC m=+0.045235942 container create 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:50:46 np0005534516 systemd[1]: Started libpod-conmon-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope.
Nov 25 03:50:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:46 np0005534516 podman[365284]: 2025-11-25 08:50:45.951851411 +0000 UTC m=+0.027042595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:46 np0005534516 podman[365284]: 2025-11-25 08:50:46.065918866 +0000 UTC m=+0.141110110 container init 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:50:46 np0005534516 podman[365284]: 2025-11-25 08:50:46.076258293 +0000 UTC m=+0.151449507 container start 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:50:46 np0005534516 podman[365284]: 2025-11-25 08:50:46.080698572 +0000 UTC m=+0.155889836 container attach 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:50:47 np0005534516 musing_davinci[365300]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:50:47 np0005534516 musing_davinci[365300]: --> relative data size: 1.0
Nov 25 03:50:47 np0005534516 musing_davinci[365300]: --> All data devices are unavailable
Nov 25 03:50:47 np0005534516 systemd[1]: libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Deactivated successfully.
Nov 25 03:50:47 np0005534516 systemd[1]: libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Consumed 1.044s CPU time.
Nov 25 03:50:47 np0005534516 conmon[365300]: conmon 6c9c0d3f27aa7494f240 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope/container/memory.events
Nov 25 03:50:47 np0005534516 podman[365284]: 2025-11-25 08:50:47.179768076 +0000 UTC m=+1.254959290 container died 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:50:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.588 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.609 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.611 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.611 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.616 253542 DEBUG nova.virt.libvirt.vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.616 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.618 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.618 253542 DEBUG os_vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.621 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.632 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap340ce0e3-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.633 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap340ce0e3-8b, col_values=(('external_ids', {'iface-id': '340ce0e3-8b72-4b40-afcb-53f30e6cc961', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:f0:9c', 'vm-uuid': '49b75125-0ca4-438d-9f2a-1d130a6b5632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.6377] manager: (tap340ce0e3-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.645 253542 INFO os_vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.646 253542 DEBUG nova.virt.libvirt.vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.647 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.649 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.653 253542 DEBUG nova.virt.libvirt.guest [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <target dev="tap340ce0e3-8b"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:50:47 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:50:47 np0005534516 kernel: tap340ce0e3-8b: entered promiscuous mode
Nov 25 03:50:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:47Z|01081|binding|INFO|Claiming lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 for this chassis.
Nov 25 03:50:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:47Z|01082|binding|INFO|340ce0e3-8b72-4b40-afcb-53f30e6cc961: Claiming fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.6693] manager: (tap340ce0e3-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.680 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f0:9c 10.100.0.27'], port_security=['fa:16:3e:39:f0:9c 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e204e23-2391-4196-9262-d69db603285d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dda51c2a-7354-4889-adc9-442043cc4089, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=340ce0e3-8b72-4b40-afcb-53f30e6cc961) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.681 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 in datapath 1e204e23-2391-4196-9262-d69db603285d bound to our chassis#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.683 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e204e23-2391-4196-9262-d69db603285d#033[00m
Nov 25 03:50:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.703 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81a6bc13-d81a-49cb-bcaa-8ae1e2e92538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.703 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e204e23-21 in ovnmeta-1e204e23-2391-4196-9262-d69db603285d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.705 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e204e23-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e272aad6-05b8-4aba-a560-626c06d7498f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[919a74f0-1651-4501-98dd-bef9ef5b9b6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 systemd-udevd[365346]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:50:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:47Z|01083|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 ovn-installed in OVS
Nov 25 03:50:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:47Z|01084|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 up in Southbound
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.734 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[49cf8429-4a9f-4bb3-9d9a-8d0c94e8a02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.7412] device (tap340ce0e3-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.7426] device (tap340ce0e3-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.753 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b31a01-062d-4080-8c80-088548767fed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7b740db5-b0b2-4f4f-a1b1-e78d676603bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.796 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb994e04-8fca-4449-9ca5-3302e053c051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.7971] manager: (tap1e204e23-20): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Nov 25 03:50:47 np0005534516 systemd-udevd[365349]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:50:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36-merged.mount: Deactivated successfully.
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[568d642f-6abb-4f63-8850-6d16b520f091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.849 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7d65d6-ed7d-4720-9535-2b0829c84d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.876 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.876 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.877 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:88:13:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.877 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:39:f0:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:50:47 np0005534516 NetworkManager[48915]: <info>  [1764060647.8800] device (tap1e204e23-20): carrier: link connected
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.885 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[07cb6670-19c4-4215-a097-77e42f8faa79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.897 253542 DEBUG nova.virt.libvirt.guest [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:47 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 03:50:47 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:47 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:47 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:47 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.910 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03c396-43be-4220-9299-c434f94351eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e204e23-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:32:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600247, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365372, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 nova_compute[253538]: 2025-11-25 08:50:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9fa631-c64a-41d8-9751-3664517fa5b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:3256'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600247, 'tstamp': 600247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365373, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.943 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c78ac0f7-e13c-4471-a956-9a428867e831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e204e23-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:32:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600247, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365374, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ad4580-f050-4d3e-a82c-47377a40cade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10c6e1e7-e92a-4cab-8da1-7002d1922f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.054 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e204e23-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e204e23-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:48 np0005534516 NetworkManager[48915]: <info>  [1764060648.0578] manager: (tap1e204e23-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 25 03:50:48 np0005534516 kernel: tap1e204e23-20: entered promiscuous mode
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e204e23-20, col_values=(('external_ids', {'iface-id': '6e90b682-b94d-4606-bcb2-f7667ba8c85c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:48Z|01085|binding|INFO|Releasing lport 6e90b682-b94d-4606-bcb2-f7667ba8c85c from this chassis (sb_readonly=0)
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.086 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97d9f0a3-f46a-4239-81f7-350579700a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.089 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-1e204e23-2391-4196-9262-d69db603285d
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 1e204e23-2391-4196-9262-d69db603285d
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:50:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.089 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'env', 'PROCESS_TAG=haproxy-1e204e23-2391-4196-9262-d69db603285d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e204e23-2391-4196-9262-d69db603285d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.161 253542 DEBUG nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.162 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.162 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 DEBUG nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 WARNING nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:48 np0005534516 podman[365284]: 2025-11-25 08:50:48.355439883 +0000 UTC m=+2.430631057 container remove 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:50:48 np0005534516 systemd[1]: libpod-conmon-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Deactivated successfully.
Nov 25 03:50:48 np0005534516 podman[365415]: 2025-11-25 08:50:48.48264548 +0000 UTC m=+0.030550370 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.898 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port 340ce0e3-8b72-4b40-afcb-53f30e6cc961. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.899 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:48 np0005534516 nova_compute[253538]: 2025-11-25 08:50:48.913 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:49 np0005534516 podman[365415]: 2025-11-25 08:50:49.232827121 +0000 UTC m=+0.780731971 container create 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:50:49 np0005534516 systemd[1]: Started libpod-conmon-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope.
Nov 25 03:50:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcebdde85a3fd006ad7591ab9e8a651e0c91f4e265451ae67fa9db8eed4eafeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:49Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 03:50:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:49Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 03:50:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Nov 25 03:50:49 np0005534516 podman[365415]: 2025-11-25 08:50:49.710272959 +0000 UTC m=+1.258177829 container init 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:50:49 np0005534516 podman[365415]: 2025-11-25 08:50:49.71707865 +0000 UTC m=+1.264983500 container start 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:50:49 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : New worker (365553) forked
Nov 25 03:50:49 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : Loading success.
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.249055467 +0000 UTC m=+0.084119524 container create 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.282 253542 DEBUG nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.282 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.284 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.284 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.285 253542 DEBUG nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.286 253542 WARNING nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.201664507 +0000 UTC m=+0.036728644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:50 np0005534516 systemd[1]: Started libpod-conmon-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope.
Nov 25 03:50:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.347 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.348 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.371 253542 DEBUG nova.objects.instance [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.373289583 +0000 UTC m=+0.208353730 container init 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.38696457 +0000 UTC m=+0.222028657 container start 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.395 253542 DEBUG nova.virt.libvirt.vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:50:50 np0005534516 brave_varahamihira[365592]: 167 167
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.396 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:50 np0005534516 systemd[1]: libpod-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope: Deactivated successfully.
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.396 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.402884936 +0000 UTC m=+0.237949023 container attach 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.403706478 +0000 UTC m=+0.238770535 container died 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.402 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.408 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.412 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Attempting to detach device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.413 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <target dev="tap340ce0e3-8b"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.438 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.442 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <name>instance-0000006d</name>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:88:13:51'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='tapd0945383-2a'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:39:f0:9c'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='tap340ce0e3-8b'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.442 253542 INFO nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the persistent domain config.#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.446 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] (1/8): Attempting to detach device tap340ce0e3-8b with device alias net1 from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.449 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <target dev="tap340ce0e3-8b"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:50:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1c5b357e7aa5e1e82865129946a0f8f4702059266ec20bc518aed620db2e52d5-merged.mount: Deactivated successfully.
Nov 25 03:50:50 np0005534516 kernel: tap340ce0e3-8b (unregistering): left promiscuous mode
Nov 25 03:50:50 np0005534516 NetworkManager[48915]: <info>  [1764060650.5710] device (tap340ce0e3-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:50:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:50Z|01086|binding|INFO|Releasing lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 from this chassis (sb_readonly=0)
Nov 25 03:50:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:50Z|01087|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 down in Southbound
Nov 25 03:50:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:50Z|01088|binding|INFO|Removing iface tap340ce0e3-8b ovn-installed in OVS
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.584 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764060650.5844204, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.589 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f0:9c 10.100.0.27'], port_security=['fa:16:3e:39:f0:9c 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e204e23-2391-4196-9262-d69db603285d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dda51c2a-7354-4889-adc9-442043cc4089, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=340ce0e3-8b72-4b40-afcb-53f30e6cc961) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.591 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 in datapath 1e204e23-2391-4196-9262-d69db603285d unbound from our chassis#033[00m
Nov 25 03:50:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.594 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e204e23-2391-4196-9262-d69db603285d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.587 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Start waiting for the detach event from libvirt for device tap340ce0e3-8b with device alias net1 for instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.588 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:50:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.596 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87bc1c9f-74cc-4603-8469-88353288045e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.597 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e204e23-2391-4196-9262-d69db603285d namespace which is not needed anymore#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.600 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <name>instance-0000006d</name>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:88:13:51'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target dev='tapd0945383-2a'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.601 253542 INFO nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the live domain config.#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.602 253542 DEBUG nova.virt.libvirt.vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.602 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.603 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.604 253542 DEBUG os_vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.607 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap340ce0e3-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.625 253542 INFO os_vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')#033[00m
Nov 25 03:50:50 np0005534516 nova_compute[253538]: 2025-11-25 08:50:50.626 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:50 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:50 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:50 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:50:50 np0005534516 podman[365576]: 2025-11-25 08:50:50.653177759 +0000 UTC m=+0.488241846 container remove 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:50:50 np0005534516 systemd[1]: libpod-conmon-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope: Deactivated successfully.
Nov 25 03:50:50 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : haproxy version is 2.8.14-c23fe91
Nov 25 03:50:50 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : path to executable is /usr/sbin/haproxy
Nov 25 03:50:50 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [WARNING]  (365551) : Exiting Master process...
Nov 25 03:50:50 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [ALERT]    (365551) : Current worker (365553) exited with code 143 (Terminated)
Nov 25 03:50:50 np0005534516 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [WARNING]  (365551) : All workers exited. Exiting... (0)
Nov 25 03:50:50 np0005534516 systemd[1]: libpod-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope: Deactivated successfully.
Nov 25 03:50:50 np0005534516 podman[365630]: 2025-11-25 08:50:50.804927483 +0000 UTC m=+0.084928475 container died 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:50:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fcebdde85a3fd006ad7591ab9e8a651e0c91f4e265451ae67fa9db8eed4eafeb-merged.mount: Deactivated successfully.
Nov 25 03:50:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547-userdata-shm.mount: Deactivated successfully.
Nov 25 03:50:50 np0005534516 podman[365651]: 2025-11-25 08:50:50.884457024 +0000 UTC m=+0.046449266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:51 np0005534516 podman[365630]: 2025-11-25 08:50:51.094084568 +0000 UTC m=+0.374085490 container cleanup 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:50:51 np0005534516 systemd[1]: libpod-conmon-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope: Deactivated successfully.
Nov 25 03:50:51 np0005534516 podman[365651]: 2025-11-25 08:50:51.145247007 +0000 UTC m=+0.307239269 container create 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:50:51 np0005534516 systemd[1]: Started libpod-conmon-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope.
Nov 25 03:50:51 np0005534516 podman[365675]: 2025-11-25 08:50:51.234172299 +0000 UTC m=+0.114535198 container remove 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cea430a7-4ee3-4189-98f6-032534167cc4]: (4, ('Tue Nov 25 08:50:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d (6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547)\n6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547\nTue Nov 25 08:50:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d (6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547)\n6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d935e2-e206-42ab-bdf6-96503eec19ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.247 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e204e23-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:51 np0005534516 kernel: tap1e204e23-20: left promiscuous mode
Nov 25 03:50:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:51 np0005534516 podman[365651]: 2025-11-25 08:50:51.28273741 +0000 UTC m=+0.444729652 container init 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.300 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69e4bdcc-ed46-4774-b42c-99e220d2d83b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 podman[365651]: 2025-11-25 08:50:51.301814481 +0000 UTC m=+0.463806723 container start 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:50:51 np0005534516 podman[365651]: 2025-11-25 08:50:51.317135711 +0000 UTC m=+0.479127963 container attach 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34e9d573-863b-4145-aaab-2fd1af56348f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.320 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad300bd-cf2e-4545-90e5-d1dca342b9c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.339 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56060baa-ebe3-4c40-9da9-3a242622273f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600238, 'reachable_time': 15002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365696, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 systemd[1]: run-netns-ovnmeta\x2d1e204e23\x2d2391\x2d4196\x2d9262\x2dd69db603285d.mount: Deactivated successfully.
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.346 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e204e23-2391-4196-9262-d69db603285d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:50:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.346 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6a33ca9e-010e-494a-b997-ce910123da90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.493 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.493 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.494 253542 DEBUG nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.562 253542 DEBUG nova.compute.manager [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-deleted-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.563 253542 INFO nova.compute.manager [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Neutron deleted interface 340ce0e3-8b72-4b40-afcb-53f30e6cc961; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.563 253542 DEBUG nova.network.neutron [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.582 253542 DEBUG nova.objects.instance [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.605 253542 DEBUG nova.objects.instance [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.635 253542 DEBUG nova.virt.libvirt.vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.636 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.637 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.643 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.648 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <name>instance-0000006d</name>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:88:13:51'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='tapd0945383-2a'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.657 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.662 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> not found in domain: <domain type='kvm' id='135'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <name>instance-0000006d</name>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:88:13:51'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target dev='tapd0945383-2a'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.662 253542 WARNING nova.virt.libvirt.driver [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Detaching interface fa:16:3e:39:f0:9c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap340ce0e3-8b' not found.
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.663 253542 DEBUG nova.virt.libvirt.vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.663 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.664 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.664 253542 DEBUG os_vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.665 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap340ce0e3-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.666 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.667 253542 INFO os_vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')#033[00m
Nov 25 03:50:51 np0005534516 nova_compute[253538]: 2025-11-25 08:50:51.668 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:50:51</nova:creationTime>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 03:50:51 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:50:51 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:50:51 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:50:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 2.3 KiB/s wr, 27 op/s
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]: {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    "0": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "devices": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "/dev/loop3"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            ],
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_name": "ceph_lv0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_size": "21470642176",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "name": "ceph_lv0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "tags": {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_name": "ceph",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.crush_device_class": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.encrypted": "0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_id": "0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.vdo": "0"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            },
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "vg_name": "ceph_vg0"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        }
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    ],
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    "1": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "devices": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "/dev/loop4"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            ],
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_name": "ceph_lv1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_size": "21470642176",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "name": "ceph_lv1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "tags": {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_name": "ceph",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.crush_device_class": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.encrypted": "0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_id": "1",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.vdo": "0"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            },
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "vg_name": "ceph_vg1"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        }
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    ],
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    "2": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "devices": [
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "/dev/loop5"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            ],
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_name": "ceph_lv2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_size": "21470642176",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "name": "ceph_lv2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "tags": {
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.cluster_name": "ceph",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.crush_device_class": "",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.encrypted": "0",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osd_id": "2",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:                "ceph.vdo": "0"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            },
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "type": "block",
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:            "vg_name": "ceph_vg2"
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:        }
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]:    ]
Nov 25 03:50:52 np0005534516 pensive_tharp[365691]: }
Nov 25 03:50:52 np0005534516 systemd[1]: libpod-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope: Deactivated successfully.
Nov 25 03:50:52 np0005534516 conmon[365691]: conmon 7fe575f0d8cd2a2c305a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope/container/memory.events
Nov 25 03:50:52 np0005534516 podman[365651]: 2025-11-25 08:50:52.116135429 +0000 UTC m=+1.278127661 container died 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:50:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f-merged.mount: Deactivated successfully.
Nov 25 03:50:52 np0005534516 podman[365651]: 2025-11-25 08:50:52.175346345 +0000 UTC m=+1.337338577 container remove 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:50:52 np0005534516 systemd[1]: libpod-conmon-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope: Deactivated successfully.
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.405 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 WARNING nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 WARNING nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:50:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:52Z|01089|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 03:50:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:52Z|01090|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 03:50:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:52 np0005534516 nova_compute[253538]: 2025-11-25 08:50:52.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:52Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:20:35 10.100.0.11
Nov 25 03:50:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:52Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:20:35 10.100.0.11
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:52.913517514 +0000 UTC m=+0.029282644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:53.030739604 +0000 UTC m=+0.146504754 container create 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:50:53 np0005534516 systemd[1]: Started libpod-conmon-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope.
Nov 25 03:50:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.253 253542 INFO nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.254 253542 DEBUG nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.269 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.287 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:50:53
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:53.330479551 +0000 UTC m=+0.446244701 container init 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:53.347162228 +0000 UTC m=+0.462927348 container start 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:50:53 np0005534516 systemd[1]: libpod-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope: Deactivated successfully.
Nov 25 03:50:53 np0005534516 beautiful_germain[365870]: 167 167
Nov 25 03:50:53 np0005534516 conmon[365870]: conmon 0ce92cba8547b96d9280 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope/container/memory.events
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.419 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.419 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.422 253542 INFO nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Terminating instance#033[00m
Nov 25 03:50:53 np0005534516 nova_compute[253538]: 2025-11-25 08:50:53.423 253542 DEBUG nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:53.602780533 +0000 UTC m=+0.718545733 container attach 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:50:53 np0005534516 podman[365854]: 2025-11-25 08:50:53.603635026 +0000 UTC m=+0.719400166 container died 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 219 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 752 KiB/s wr, 58 op/s
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:50:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:50:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e17eeb591c5998cef01394ac33c1bc1fc030fd5e1a8425c01f6d9c2d38218d8f-merged.mount: Deactivated successfully.
Nov 25 03:50:54 np0005534516 kernel: tapd0945383-2a (unregistering): left promiscuous mode
Nov 25 03:50:54 np0005534516 NetworkManager[48915]: <info>  [1764060654.0487] device (tapd0945383-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:50:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:54Z|01091|binding|INFO|Releasing lport d0945383-2a0d-4019-9b60-eea96d667c69 from this chassis (sb_readonly=0)
Nov 25 03:50:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:54Z|01092|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 down in Southbound
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:50:54Z|01093|binding|INFO|Removing iface tapd0945383-2a ovn-installed in OVS
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.074 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:51 10.100.0.3'], port_security=['fa:16:3e:88:13:51 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c833a599-5a18-44d2-82ad-b16f7476c220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4dacd795-ee8f-4895-b3fe-aaa7865132b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d0e444d-0863-4949-9e8d-d9b0bfd89ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0945383-2a0d-4019-9b60-eea96d667c69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:50:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.075 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0945383-2a0d-4019-9b60-eea96d667c69 in datapath c833a599-5a18-44d2-82ad-b16f7476c220 unbound from our chassis#033[00m
Nov 25 03:50:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.076 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c833a599-5a18-44d2-82ad-b16f7476c220, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.077 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8a2ff5-4b38-4b44-a6db-068d1309a68e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.078 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 namespace which is not needed anymore#033[00m
Nov 25 03:50:54 np0005534516 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 03:50:54 np0005534516 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 15.495s CPU time.
Nov 25 03:50:54 np0005534516 systemd-machined[215790]: Machine qemu-135-instance-0000006d terminated.
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.270 253542 INFO nova.virt.libvirt.driver [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance destroyed successfully.#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.271 253542 DEBUG nova.objects.instance [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.290 253542 DEBUG nova.virt.libvirt.vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.290 253542 DEBUG nova.network.os_vif_util [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.291 253542 DEBUG nova.network.os_vif_util [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.292 253542 DEBUG os_vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.294 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0945383-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.302 253542 INFO os_vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a')#033[00m
Nov 25 03:50:54 np0005534516 podman[365854]: 2025-11-25 08:50:54.375510728 +0000 UTC m=+1.491275838 container remove 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 03:50:54 np0005534516 systemd[1]: libpod-conmon-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope: Deactivated successfully.
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.676 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.677 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.679 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.680 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:54 np0005534516 nova_compute[253538]: 2025-11-25 08:50:54.681 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : haproxy version is 2.8.14-c23fe91
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : path to executable is /usr/sbin/haproxy
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : Exiting Master process...
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : Exiting Master process...
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [ALERT]    (364379) : Current worker (364382) exited with code 143 (Terminated)
Nov 25 03:50:54 np0005534516 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : All workers exited. Exiting... (0)
Nov 25 03:50:54 np0005534516 systemd[1]: libpod-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope: Deactivated successfully.
Nov 25 03:50:54 np0005534516 podman[365939]: 2025-11-25 08:50:54.807649531 +0000 UTC m=+0.360203777 container died 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:50:54 np0005534516 podman[365956]: 2025-11-25 08:50:54.797845429 +0000 UTC m=+0.277736389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:50:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a-userdata-shm.mount: Deactivated successfully.
Nov 25 03:50:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c75c05cfa6c1a51f5b9e1bfff96754528c2b327c0c5c0ba20f6d03e2e0d2380a-merged.mount: Deactivated successfully.
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:50:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 239 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.844 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.844 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:50:55 np0005534516 nova_compute[253538]: 2025-11-25 08:50:55.866 253542 WARNING nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:50:55 np0005534516 podman[365956]: 2025-11-25 08:50:55.886252138 +0000 UTC m=+1.366143068 container create 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:50:55 np0005534516 podman[365939]: 2025-11-25 08:50:55.915741508 +0000 UTC m=+1.468295744 container cleanup 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:50:55 np0005534516 systemd[1]: Started libpod-conmon-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope.
Nov 25 03:50:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:56 np0005534516 systemd[1]: libpod-conmon-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope: Deactivated successfully.
Nov 25 03:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:50:56 np0005534516 podman[365956]: 2025-11-25 08:50:56.040568741 +0000 UTC m=+1.520459651 container init 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:50:56 np0005534516 podman[365956]: 2025-11-25 08:50:56.053688492 +0000 UTC m=+1.533579382 container start 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:50:56 np0005534516 podman[365988]: 2025-11-25 08:50:56.057398902 +0000 UTC m=+0.107294345 container remove 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2e315e-9999-49dc-9b8c-0495ef4fac75]: (4, ('Tue Nov 25 08:50:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 (56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a)\n56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a\nTue Nov 25 08:50:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 (56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a)\n56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb2a851-efe6-4bd0-8da6-6fc6bb2589ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.070 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc833a599-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:50:56 np0005534516 podman[365956]: 2025-11-25 08:50:56.07263095 +0000 UTC m=+1.552521840 container attach 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:50:56 np0005534516 kernel: tapc833a599-50: left promiscuous mode
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.073 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.091 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb16900d-a3d8-4146-b275-d87d95fd51a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.118 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae600429-b93e-446e-9dfd-c3f403fe6d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.120 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9686ff0b-ec77-4ac7-ad00-9d68ce314fa4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.138 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91bb9c11-a0eb-46db-b184-5a1a4562e6ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597187, 'reachable_time': 41576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366009, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 systemd[1]: run-netns-ovnmeta\x2dc833a599\x2d5a18\x2d44d2\x2d82ad\x2db16f7476c220.mount: Deactivated successfully.
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.143 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:50:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.143 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[78803b91-ed67-49a4-b280-5abd50cc12e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 25 03:50:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 25 03:50:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.398 253542 INFO nova.virt.libvirt.driver [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deleting instance files /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632_del#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.399 253542 INFO nova.virt.libvirt.driver [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deletion of /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632_del complete#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.498 253542 INFO nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 3.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.499 253542 DEBUG oslo.service.loopingcall [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.500 253542 DEBUG nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:50:56 np0005534516 nova_compute[253538]: 2025-11-25 08:50:56.500 253542 DEBUG nova.network.neutron [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]: {
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_id": 1,
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "type": "bluestore"
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    },
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_id": 2,
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "type": "bluestore"
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    },
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_id": 0,
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:        "type": "bluestore"
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]:    }
Nov 25 03:50:57 np0005534516 adoring_faraday[365989]: }
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 25 03:50:57 np0005534516 systemd[1]: libpod-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Deactivated successfully.
Nov 25 03:50:57 np0005534516 podman[365956]: 2025-11-25 08:50:57.234670061 +0000 UTC m=+2.714560951 container died 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:50:57 np0005534516 systemd[1]: libpod-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Consumed 1.152s CPU time.
Nov 25 03:50:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca-merged.mount: Deactivated successfully.
Nov 25 03:50:57 np0005534516 podman[365956]: 2025-11-25 08:50:57.295768747 +0000 UTC m=+2.775659637 container remove 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:50:57 np0005534516 systemd[1]: libpod-conmon-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Deactivated successfully.
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.358 253542 DEBUG nova.network.neutron [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 00a433db-0208-4058-ab2c-5b88c0c04df6 does not exist
Nov 25 03:50:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 906afcbe-3e35-4330-9d29-0f9a1d073551 does not exist
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.372 253542 INFO nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.416 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.417 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.442 253542 DEBUG nova.compute.manager [req-c53d841e-b80c-49c3-a820-2c9c24502234 req-03439cd3-faa4-4710-9680-9bfe2d35926e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-deleted-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.491 253542 DEBUG oslo_concurrency.processutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 199 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:50:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198989968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:50:57 np0005534516 nova_compute[253538]: 2025-11-25 08:50:57.990 253542 DEBUG oslo_concurrency.processutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.000 253542 DEBUG nova.compute.provider_tree [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.025 253542 DEBUG nova.scheduler.client.report [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.044 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.069 253542 INFO nova.scheduler.client.report [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 49b75125-0ca4-438d-9f2a-1d130a6b5632#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.077 253542 INFO nova.compute.manager [None req-75344707-4b45-4ca5-bd88-ba2015dc36f5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Get console output#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.088 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.145 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.401 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.402 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.402 253542 INFO nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Rebooting instance#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.415 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.416 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:50:58 np0005534516 nova_compute[253538]: 2025-11-25 08:50:58.416 253542 DEBUG nova.network.neutron [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:50:59 np0005534516 nova_compute[253538]: 2025-11-25 08:50:59.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:50:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 3.2 MiB/s wr, 180 op/s
Nov 25 03:51:00 np0005534516 nova_compute[253538]: 2025-11-25 08:51:00.959 253542 DEBUG nova.network.neutron [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:00 np0005534516 nova_compute[253538]: 2025-11-25 08:51:00.988 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:00 np0005534516 nova_compute[253538]: 2025-11-25 08:51:00.991 253542 DEBUG nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 03:51:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:02Z|01094|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 03:51:02 np0005534516 nova_compute[253538]: 2025-11-25 08:51:02.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:02 np0005534516 nova_compute[253538]: 2025-11-25 08:51:02.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:02 np0005534516 podman[366125]: 2025-11-25 08:51:02.85051767 +0000 UTC m=+0.081460523 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 03:51:03 np0005534516 kernel: tap0547929c-86 (unregistering): left promiscuous mode
Nov 25 03:51:03 np0005534516 NetworkManager[48915]: <info>  [1764060663.4906] device (tap0547929c-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:51:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:03Z|01095|binding|INFO|Releasing lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 from this chassis (sb_readonly=0)
Nov 25 03:51:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:03Z|01096|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 down in Southbound
Nov 25 03:51:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:03Z|01097|binding|INFO|Removing iface tap0547929c-86 ovn-installed in OVS
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.513 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.515 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 unbound from our chassis#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.515 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0613062-c56d-4f59-a1bd-5487b9cae905, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[582c7d5b-ec68-4866-b01c-031219518b6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace which is not needed anymore#033[00m
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 03:51:03 np0005534516 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Consumed 14.817s CPU time.
Nov 25 03:51:03 np0005534516 systemd-machined[215790]: Machine qemu-136-instance-0000006e terminated.
Nov 25 03:51:03 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : haproxy version is 2.8.14-c23fe91
Nov 25 03:51:03 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : path to executable is /usr/sbin/haproxy
Nov 25 03:51:03 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [WARNING]  (364886) : Exiting Master process...
Nov 25 03:51:03 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [ALERT]    (364886) : Current worker (364888) exited with code 143 (Terminated)
Nov 25 03:51:03 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [WARNING]  (364886) : All workers exited. Exiting... (0)
Nov 25 03:51:03 np0005534516 systemd[1]: libpod-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope: Deactivated successfully.
Nov 25 03:51:03 np0005534516 podman[366170]: 2025-11-25 08:51:03.685474031 +0000 UTC m=+0.053775441 container died 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:51:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 115 KiB/s wr, 110 op/s
Nov 25 03:51:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af-userdata-shm.mount: Deactivated successfully.
Nov 25 03:51:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8cf67b50908d3dd49b2c0a0481b4509ed99ab81eb1cd3289fe65b839d4061e7d-merged.mount: Deactivated successfully.
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 podman[366170]: 2025-11-25 08:51:03.737408452 +0000 UTC m=+0.105709852 container cleanup 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:51:03 np0005534516 systemd[1]: libpod-conmon-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope: Deactivated successfully.
Nov 25 03:51:03 np0005534516 podman[366205]: 2025-11-25 08:51:03.816632013 +0000 UTC m=+0.052576679 container remove 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1752eac3-6274-4b05-bffb-5300166fd1fb]: (4, ('Tue Nov 25 08:51:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af)\n12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af\nTue Nov 25 08:51:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af)\n12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[110d7db9-aff3-43f8-b21d-dcdb78de8858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.831 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 kernel: tapc0613062-c0: left promiscuous mode
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 nova_compute[253538]: 2025-11-25 08:51:03.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a6e6f2-5fcc-41a1-bf24-288f19a9fd14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.881 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[621bf3bb-5e8c-44b9-b8da-21cb337888a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.883 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38384457-dd65-4930-afa1-5267586314cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c182968-a11f-415a-81a4-1a7033af170c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599103, 'reachable_time': 39433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366223, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.906 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:51:03 np0005534516 systemd[1]: run-netns-ovnmeta\x2dc0613062\x2dc56d\x2d4f59\x2da1bd\x2d5487b9cae905.mount: Deactivated successfully.
Nov 25 03:51:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.907 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[235b6405-31fb-4d25-bd12-3ed8220d5bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.125 253542 DEBUG nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.127 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.127 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.128 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.128 253542 DEBUG nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.129 253542 WARNING nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state reboot_started.#033[00m
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.142 253542 INFO nova.virt.libvirt.driver [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance shutdown successfully.#033[00m
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600361398710686 of space, bias 1.0, pg target 0.22801084196132057 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:51:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:51:04 np0005534516 kernel: tap0547929c-86: entered promiscuous mode
Nov 25 03:51:04 np0005534516 systemd-udevd[366153]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.2299] manager: (tap0547929c-86): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 25 03:51:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:04Z|01098|binding|INFO|Claiming lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for this chassis.
Nov 25 03:51:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:04Z|01099|binding|INFO|0547929c-86ba-4aaa-869f-c7e2b5ea7e67: Claiming fa:16:3e:86:20:35 10.100.0.11
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.239 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.241 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 bound to our chassis#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.243 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0613062-c56d-4f59-a1bd-5487b9cae905#033[00m
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.2445] device (tap0547929c-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.2463] device (tap0547929c-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:04Z|01100|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 ovn-installed in OVS
Nov 25 03:51:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:04Z|01101|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 up in Southbound
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.262 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd0a604-875a-43e0-874f-07fc9ffdef6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.263 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0613062-c1 in ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.266 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0613062-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.266 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf122442-511a-43b7-9053-e9296d57b06e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.267 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74f97bf6-eca1-4cc4-9256-f0f29fb31049]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.281 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0d36343b-db2e-43e2-bee9-c2482b8ef950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 systemd-machined[215790]: New machine qemu-137-instance-0000006e.
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec903d7-5181-4534-a3db-20403b2914f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 systemd[1]: Started Virtual Machine qemu-137-instance-0000006e.
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e067c61-fdd2-4580-a9fe-562e97288a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef0ba18-172b-452c-ae17-21ca55dfe967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.3575] manager: (tapc0613062-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b4df58c3-cb8b-4bfd-bef5-48e42e86104c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.411 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[65c82f1c-0641-47ca-b1b5-17d21eaff36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.4506] device (tapc0613062-c0): carrier: link connected
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.461 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad389578-1bc6-4e39-98d8-4db036b89d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e915cf-44dc-4801-a75a-58ee340d9d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601904, 'reachable_time': 23683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366267, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.512 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf41db1-8bc9-43a8-b774-31592e1f3375]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a31e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601904, 'tstamp': 601904}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366268, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[255cf0be-f100-4371-8bd0-848f9ab7aa06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601904, 'reachable_time': 23683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366283, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51f6a5fe-670e-4788-9eca-66c6808e2a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.666 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dee934d6-1776-47fc-b0fc-c3afc6bfba75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0613062-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:04 np0005534516 NetworkManager[48915]: <info>  [1764060664.6698] manager: (tapc0613062-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Nov 25 03:51:04 np0005534516 kernel: tapc0613062-c0: entered promiscuous mode
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0613062-c0, col_values=(('external_ids', {'iface-id': '0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:04Z|01102|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.676 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9f45f5-de79-4032-97e1-15f320797043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.679 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:51:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.679 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'env', 'PROCESS_TAG=haproxy-c0613062-c56d-4f59-a1bd-5487b9cae905', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0613062-c56d-4f59-a1bd-5487b9cae905.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.733 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for b4f98996-3a98-41ad-af66-af37066515d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.734 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060664.7330246, b4f98996-3a98-41ad-af66-af37066515d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.734 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.739 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance running successfully.#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.739 253542 INFO nova.virt.libvirt.driver [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance soft rebooted successfully.#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.740 253542 DEBUG nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.763 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.766 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.796 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.797 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060664.7357535, b4f98996-3a98-41ad-af66-af37066515d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.797 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Started (Lifecycle Event)#033[00m
Nov 25 03:51:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 25 03:51:04 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.805 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.818 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:04 np0005534516 nova_compute[253538]: 2025-11-25 08:51:04.823 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:05 np0005534516 podman[366343]: 2025-11-25 08:51:05.125788824 +0000 UTC m=+0.054198622 container create dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:51:05 np0005534516 systemd[1]: Started libpod-conmon-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope.
Nov 25 03:51:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:51:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06636a18fe499615bcb16111056ae7d170b090417624338dfdbe17fd20509fcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:51:05 np0005534516 podman[366343]: 2025-11-25 08:51:05.191295799 +0000 UTC m=+0.119705617 container init dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:51:05 np0005534516 podman[366343]: 2025-11-25 08:51:05.101249778 +0000 UTC m=+0.029659606 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:51:05 np0005534516 podman[366343]: 2025-11-25 08:51:05.200108175 +0000 UTC m=+0.128517983 container start dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 03:51:05 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : New worker (366383) forked
Nov 25 03:51:05 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : Loading success.
Nov 25 03:51:05 np0005534516 podman[366356]: 2025-11-25 08:51:05.232652227 +0000 UTC m=+0.070489849 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:51:05 np0005534516 nova_compute[253538]: 2025-11-25 08:51:05.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 46 KiB/s wr, 64 op/s
Nov 25 03:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 25 03:51:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 25 03:51:05 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.236 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.236 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.237 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.237 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:06 np0005534516 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 25 03:51:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 25 03:51:06 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 25 03:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 25 03:51:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 25 03:51:07 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 25 03:51:07 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 03:51:07 np0005534516 nova_compute[253538]: 2025-11-25 08:51:07.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 112 op/s
Nov 25 03:51:08 np0005534516 nova_compute[253538]: 2025-11-25 08:51:08.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 25 03:51:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 25 03:51:08 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 25 03:51:09 np0005534516 nova_compute[253538]: 2025-11-25 08:51:09.267 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060654.2659361, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:09 np0005534516 nova_compute[253538]: 2025-11-25 08:51:09.267 253542 INFO nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:51:09 np0005534516 nova_compute[253538]: 2025-11-25 08:51:09.288 253542 DEBUG nova.compute.manager [None req-acc0dda8-28df-4d66-9910-c1a5067b3c62 - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:09 np0005534516 nova_compute[253538]: 2025-11-25 08:51:09.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 18 KiB/s wr, 347 op/s
Nov 25 03:51:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 25 03:51:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 25 03:51:09 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 25 03:51:10 np0005534516 podman[366392]: 2025-11-25 08:51:10.896241366 +0000 UTC m=+0.135126799 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:51:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 12 KiB/s wr, 193 op/s
Nov 25 03:51:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:12 np0005534516 nova_compute[253538]: 2025-11-25 08:51:12.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 25 03:51:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 25 03:51:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.489 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.490 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.507 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.590 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.591 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.600 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.600 253542 INFO nova.compute.claims [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:51:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 220 op/s
Nov 25 03:51:13 np0005534516 nova_compute[253538]: 2025-11-25 08:51:13.729 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 25 03:51:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 25 03:51:14 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 25 03:51:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891708322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.245 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.251 253542 DEBUG nova.compute.provider_tree [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.272 253542 DEBUG nova.scheduler.client.report [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.302 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.303 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.379 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.380 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.404 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.424 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.521 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.523 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.523 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating image(s)#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.552 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.582 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.607 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.611 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.651 253542 DEBUG nova.policy [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.697 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.698 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.698 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.699 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.722 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:14 np0005534516 nova_compute[253538]: 2025-11-25 08:51:14.727 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 25 03:51:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 25 03:51:15 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.069 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.172 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.311 253542 DEBUG nova.objects.instance [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.362 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Ensure instance console log exists: /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:15 np0005534516 nova_compute[253538]: 2025-11-25 08:51:15.435 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Successfully created port: 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:51:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 8.0 KiB/s wr, 78 op/s
Nov 25 03:51:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 25 03:51:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 25 03:51:16 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.125 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Successfully updated port: 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.145 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.146 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.146 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.234 253542 DEBUG nova.compute.manager [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.235 253542 DEBUG nova.compute.manager [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.235 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:16 np0005534516 nova_compute[253538]: 2025-11-25 08:51:16.358 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:51:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 25 03:51:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 25 03:51:17 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 25 03:51:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.694 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 186 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 619 KiB/s wr, 228 op/s
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.716 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.716 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance network_info: |[{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.717 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.717 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.722 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start _get_guest_xml network_info=[{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:51:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:17Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:20:35 10.100.0.11
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.729 253542 WARNING nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.740 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.741 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.746 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.746 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.747 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.747 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.748 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.749 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.749 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.751 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.751 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.752 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.752 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:51:17 np0005534516 nova_compute[253538]: 2025-11-25 08:51:17.757 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801386082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.253 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.292 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.297 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256411726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.813 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.814 253542 DEBUG nova.virt.libvirt.vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:14Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.815 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.815 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.817 253542 DEBUG nova.objects.instance [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.830 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <uuid>e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</uuid>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <name>instance-0000006f</name>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-199412432</nova:name>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:51:17</nova:creationTime>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <nova:port uuid="2ad9a2b7-f59d-49fd-aaa3-5253dd637f18">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="serial">e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="uuid">e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:3e:2a:a3"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <target dev="tap2ad9a2b7-f5"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/console.log" append="off"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:51:18 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:51:18 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:51:18 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:51:18 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Preparing to wait for external event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.833 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.833 253542 DEBUG nova.virt.libvirt.vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:14Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG os_vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.835 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.836 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad9a2b7-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad9a2b7-f5, col_values=(('external_ids', {'iface-id': '2ad9a2b7-f59d-49fd-aaa3-5253dd637f18', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:2a:a3', 'vm-uuid': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:18 np0005534516 NetworkManager[48915]: <info>  [1764060678.8422] manager: (tap2ad9a2b7-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.848 253542 INFO os_vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5')#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.901 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.901 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.902 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:3e:2a:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.903 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Using config drive#033[00m
Nov 25 03:51:18 np0005534516 nova_compute[253538]: 2025-11-25 08:51:18.937 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.180 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.181 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.194 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.294 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating config drive at /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.299 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn521ovv7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.448 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn521ovv7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.474 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.477 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.675 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.676 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deleting local config drive /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config because it was imported into RBD.#033[00m
Nov 25 03:51:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 432 op/s
Nov 25 03:51:19 np0005534516 kernel: tap2ad9a2b7-f5: entered promiscuous mode
Nov 25 03:51:19 np0005534516 NetworkManager[48915]: <info>  [1764060679.7454] manager: (tap2ad9a2b7-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:19Z|01103|binding|INFO|Claiming lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for this chassis.
Nov 25 03:51:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:19Z|01104|binding|INFO|2ad9a2b7-f59d-49fd-aaa3-5253dd637f18: Claiming fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.756 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:2a:a3 10.100.0.12'], port_security=['fa:16:3e:3e:2a:a3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '371b8f16-6d0a-48c6-b770-1fa4712eb5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.758 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 bound to our chassis#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.760 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1#033[00m
Nov 25 03:51:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:19Z|01105|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 ovn-installed in OVS
Nov 25 03:51:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:19Z|01106|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 up in Southbound
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:19 np0005534516 nova_compute[253538]: 2025-11-25 08:51:19.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.773 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3344920b-2dad-4971-94f3-989b3d8b72f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.774 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8daad2e3-51 in ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.777 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8daad2e3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.777 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e20961-8d20-4473-a685-ae0f323a93ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.778 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7250311d-69e7-4c31-ac06-b75c90ef0ae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 systemd-udevd[366745]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.787 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[36e5ecf9-e0d3-4072-8d75-1bdc78fdd5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 systemd-machined[215790]: New machine qemu-138-instance-0000006f.
Nov 25 03:51:19 np0005534516 NetworkManager[48915]: <info>  [1764060679.8012] device (tap2ad9a2b7-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:51:19 np0005534516 NetworkManager[48915]: <info>  [1764060679.8024] device (tap2ad9a2b7-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.805 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f19af19-71d8-48f5-8dd7-95e18bca5820]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 systemd[1]: Started Virtual Machine qemu-138-instance-0000006f.
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.836 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2232733e-cf31-4a1c-9f8a-f3d93af48af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 systemd-udevd[366748]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:19 np0005534516 NetworkManager[48915]: <info>  [1764060679.8432] manager: (tap8daad2e3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.843 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6637afda-4259-4f0a-8f88-0f81ebfe11e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.876 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[29af44d6-f5a8-44b8-bb60-a407c95defd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.879 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cea1fe43-5230-4c05-8e22-cca5fe990c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 NetworkManager[48915]: <info>  [1764060679.9026] device (tap8daad2e3-50): carrier: link connected
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.908 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7d4198-4c7b-4bf0-9e18-4b57ee37a0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[312f6e7a-a5d0-4ef8-ae6f-d5ef92f7f90f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366776, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.941 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80541db5-200d-4e35-a735-6a8cc0a7fb3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:fbf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603450, 'tstamp': 603450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366777, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bd1222-eb84-4bef-bcc5-a6226fc6db97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366778, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.993 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30cd24b1-b375-44df-90c5-c5b5aa07fcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc8c945-f602-43ca-9d51-76599d3e297a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:20 np0005534516 kernel: tap8daad2e3-50: entered promiscuous mode
Nov 25 03:51:20 np0005534516 NetworkManager[48915]: <info>  [1764060680.0811] manager: (tap8daad2e3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.084 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:20Z|01107|binding|INFO|Releasing lport e844dcfd-3730-493f-b401-25ee7b281b7b from this chassis (sb_readonly=0)
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.108 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aff53141-2a0a-41b6-a0b9-4757c8b1438c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.110 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:51:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.110 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'env', 'PROCESS_TAG=haproxy-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.126 253542 DEBUG nova.compute.manager [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.127 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.127 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.128 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.128 253542 DEBUG nova.compute.manager [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Processing event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.257 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.257192, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Started (Lifecycle Event)#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.262 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.266 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.270 253542 INFO nova.virt.libvirt.driver [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance spawned successfully.#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.271 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.286 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.289 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.301 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.302 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.302 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.2573326, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.343 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.2649112, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.343 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.370 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.374 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.381 253542 INFO nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 5.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.381 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.391 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.433 253542 INFO nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 6.87 seconds to build instance.#033[00m
Nov 25 03:51:20 np0005534516 nova_compute[253538]: 2025-11-25 08:51:20.446 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:20 np0005534516 podman[366852]: 2025-11-25 08:51:20.470408568 +0000 UTC m=+0.049718212 container create b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 03:51:20 np0005534516 systemd[1]: Started libpod-conmon-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope.
Nov 25 03:51:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:51:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2849636997d3f41f71e0b1f9f347c5bb88d5e1c968dd84aab3bbb4c45a6a526/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:51:20 np0005534516 podman[366852]: 2025-11-25 08:51:20.44469124 +0000 UTC m=+0.024000944 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:51:20 np0005534516 podman[366852]: 2025-11-25 08:51:20.542849909 +0000 UTC m=+0.122159563 container init b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:51:20 np0005534516 podman[366852]: 2025-11-25 08:51:20.548363857 +0000 UTC m=+0.127673511 container start b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:51:20 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : New worker (366873) forked
Nov 25 03:51:20 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : Loading success.
Nov 25 03:51:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 337 op/s
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.350 253542 DEBUG nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.350 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.351 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.351 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.352 253542 DEBUG nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.352 253542 WARNING nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received unexpected event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 25 03:51:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 25 03:51:22 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 25 03:51:22 np0005534516 nova_compute[253538]: 2025-11-25 08:51:22.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:23 np0005534516 nova_compute[253538]: 2025-11-25 08:51:23.243 253542 INFO nova.compute.manager [None req-0c53f06c-d7ad-484c-9397-161052b07dd4 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Get console output#033[00m
Nov 25 03:51:23 np0005534516 nova_compute[253538]: 2025-11-25 08:51:23.249 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Nov 25 03:51:23 np0005534516 nova_compute[253538]: 2025-11-25 08:51:23.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG nova.compute.manager [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG nova.compute.manager [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.083 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.083 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.145 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.147 253542 INFO nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Terminating instance#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.148 253542 DEBUG nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:51:24 np0005534516 kernel: tap0547929c-86 (unregistering): left promiscuous mode
Nov 25 03:51:24 np0005534516 NetworkManager[48915]: <info>  [1764060684.2002] device (tap0547929c-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:51:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:24Z|01108|binding|INFO|Releasing lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 from this chassis (sb_readonly=0)
Nov 25 03:51:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:24Z|01109|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 down in Southbound
Nov 25 03:51:24 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:24Z|01110|binding|INFO|Removing iface tap0547929c-86 ovn-installed in OVS
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.230 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.234 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 unbound from our chassis#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.236 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0613062-c56d-4f59-a1bd-5487b9cae905, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[181365b4-b812-49eb-a1c1-e06ba7e16d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.239 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace which is not needed anymore#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 03:51:24 np0005534516 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Consumed 13.310s CPU time.
Nov 25 03:51:24 np0005534516 systemd-machined[215790]: Machine qemu-137-instance-0000006e terminated.
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.394 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance destroyed successfully.#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.395 253542 DEBUG nova.objects.instance [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.405 253542 DEBUG nova.virt.libvirt.vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:04Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.405 253542 DEBUG nova.network.os_vif_util [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.406 253542 DEBUG nova.network.os_vif_util [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.407 253542 DEBUG os_vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.409 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0547929c-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.411 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.420 253542 INFO os_vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86')#033[00m
Nov 25 03:51:24 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : haproxy version is 2.8.14-c23fe91
Nov 25 03:51:24 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : path to executable is /usr/sbin/haproxy
Nov 25 03:51:24 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [WARNING]  (366378) : Exiting Master process...
Nov 25 03:51:24 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [ALERT]    (366378) : Current worker (366383) exited with code 143 (Terminated)
Nov 25 03:51:24 np0005534516 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [WARNING]  (366378) : All workers exited. Exiting... (0)
Nov 25 03:51:24 np0005534516 systemd[1]: libpod-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope: Deactivated successfully.
Nov 25 03:51:24 np0005534516 podman[366905]: 2025-11-25 08:51:24.434876803 +0000 UTC m=+0.071416943 container died dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 03:51:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b-userdata-shm.mount: Deactivated successfully.
Nov 25 03:51:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-06636a18fe499615bcb16111056ae7d170b090417624338dfdbe17fd20509fcd-merged.mount: Deactivated successfully.
Nov 25 03:51:24 np0005534516 podman[366905]: 2025-11-25 08:51:24.490890953 +0000 UTC m=+0.127431053 container cleanup dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:51:24 np0005534516 systemd[1]: libpod-conmon-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope: Deactivated successfully.
Nov 25 03:51:24 np0005534516 podman[366964]: 2025-11-25 08:51:24.568363668 +0000 UTC m=+0.050581945 container remove dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43494fc6-f2f8-4a99-8ba8-1846f4fe64a7]: (4, ('Tue Nov 25 08:51:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b)\ndda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b\nTue Nov 25 08:51:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b)\ndda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8084a923-c887-4465-bc48-78547d281b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 kernel: tapc0613062-c0: left promiscuous mode
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.591 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2568e9-5928-4873-917d-923c6a4a834e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db8151c8-62e5-4659-97c9-3c93bbc7d146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.610 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[284451fa-3b03-4833-a9bd-839c65e154d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.628 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef99021b-99d6-49b7-9abd-6f2c5520cc89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601893, 'reachable_time': 40088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366979, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.630 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:51:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.630 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a020c0b9-1c7c-4144-87cd-e5bf0bd0fa17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:24 np0005534516 systemd[1]: run-netns-ovnmeta\x2dc0613062\x2dc56d\x2d4f59\x2da1bd\x2d5487b9cae905.mount: Deactivated successfully.
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.738 253542 DEBUG nova.compute.manager [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG nova.compute.manager [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.740 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.831 253542 INFO nova.virt.libvirt.driver [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deleting instance files /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3_del#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.834 253542 INFO nova.virt.libvirt.driver [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deletion of /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3_del complete#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.896 253542 INFO nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.897 253542 DEBUG oslo.service.loopingcall [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.898 253542 DEBUG nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:51:24 np0005534516 nova_compute[253538]: 2025-11-25 08:51:24.898 253542 DEBUG nova.network.neutron [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:51:25 np0005534516 nova_compute[253538]: 2025-11-25 08:51:25.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:51:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 215 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 287 op/s
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.861 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.862 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.862 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.864 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.864 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:26 np0005534516 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 WARNING nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 25 03:51:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 25 03:51:27 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 25 03:51:27 np0005534516 nova_compute[253538]: 2025-11-25 08:51:27.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 189 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 36 KiB/s wr, 143 op/s
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.459 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.460 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.481 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.510 253542 DEBUG nova.network.neutron [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.532 253542 INFO nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 3.63 seconds to deallocate network for instance.#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.576 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.577 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.636 253542 DEBUG nova.compute.manager [req-33392c5f-ccfe-4e56-b5cc-6ae11e07e2e8 req-2b778316-36d7-4a82-8c2a-b1faf4e609cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-deleted-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:28 np0005534516 nova_compute[253538]: 2025-11-25 08:51:28.686 253542 DEBUG oslo_concurrency.processutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529027540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.108 253542 DEBUG oslo_concurrency.processutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.114 253542 DEBUG nova.compute.provider_tree [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.131 253542 DEBUG nova.scheduler.client.report [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.151 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.180 253542 INFO nova.scheduler.client.report [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance b4f98996-3a98-41ad-af66-af37066515d3#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.241 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.242 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.263 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.274 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:29 np0005534516 nova_compute[253538]: 2025-11-25 08:51:29.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 37 KiB/s wr, 180 op/s
Nov 25 03:51:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 31 KiB/s wr, 106 op/s
Nov 25 03:51:31 np0005534516 nova_compute[253538]: 2025-11-25 08:51:31.843 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:51:31 np0005534516 nova_compute[253538]: 2025-11-25 08:51:31.879 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 03:51:31 np0005534516 nova_compute[253538]: 2025-11-25 08:51:31.880 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:31 np0005534516 nova_compute[253538]: 2025-11-25 08:51:31.880 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:31 np0005534516 nova_compute[253538]: 2025-11-25 08:51:31.911 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:32 np0005534516 nova_compute[253538]: 2025-11-25 08:51:32.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:33 np0005534516 nova_compute[253538]: 2025-11-25 08:51:33.592 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:51:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 146 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 938 KiB/s wr, 119 op/s
Nov 25 03:51:33 np0005534516 podman[367003]: 2025-11-25 08:51:33.818370006 +0000 UTC m=+0.064197270 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:51:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:33Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 03:51:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:33Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 03:51:34 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:34Z|01111|binding|INFO|Releasing lport e844dcfd-3730-493f-b401-25ee7b281b7b from this chassis (sb_readonly=0)
Nov 25 03:51:34 np0005534516 nova_compute[253538]: 2025-11-25 08:51:34.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:34 np0005534516 nova_compute[253538]: 2025-11-25 08:51:34.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:34 np0005534516 nova_compute[253538]: 2025-11-25 08:51:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:51:34 np0005534516 nova_compute[253538]: 2025-11-25 08:51:34.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:51:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 157 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 03:51:35 np0005534516 podman[367022]: 2025-11-25 08:51:35.845971238 +0000 UTC m=+0.083926868 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 03:51:36 np0005534516 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:51:36 np0005534516 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:51:36 np0005534516 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:51:37 np0005534516 nova_compute[253538]: 2025-11-25 08:51:37.121 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:37 np0005534516 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:37 np0005534516 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:51:37 np0005534516 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:37 np0005534516 nova_compute[253538]: 2025-11-25 08:51:37.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 03:51:38 np0005534516 nova_compute[253538]: 2025-11-25 08:51:38.538 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:51:38 np0005534516 nova_compute[253538]: 2025-11-25 08:51:38.550 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:51:38 np0005534516 nova_compute[253538]: 2025-11-25 08:51:38.550 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 03:51:39 np0005534516 nova_compute[253538]: 2025-11-25 08:51:39.390 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060684.3889587, b4f98996-3a98-41ad-af66-af37066515d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:51:39 np0005534516 nova_compute[253538]: 2025-11-25 08:51:39.391 253542 INFO nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Stopped (Lifecycle Event)
Nov 25 03:51:39 np0005534516 nova_compute[253538]: 2025-11-25 08:51:39.405 253542 DEBUG nova.compute.manager [None req-f2ea9c83-649c-44b5-adfe-68e96ec671de - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:51:39 np0005534516 nova_compute[253538]: 2025-11-25 08:51:39.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:39 np0005534516 nova_compute[253538]: 2025-11-25 08:51:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:51:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.550 253542 INFO nova.compute.manager [None req-a8eedef9-005f-445c-bac8-56e75535fb1a 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Get console output
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.559 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 03:51:40 np0005534516 nova_compute[253538]: 2025-11-25 08:51:40.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:51:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969646687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.080 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
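The resource audit shells out to `ceph df --format=json` (the `Running cmd` / `CMD ... returned: 0` pair above, dispatched to the monitor as the `ceph-mon` audit lines show) and reads cluster capacity from the JSON. A sketch of that parsing against a trimmed, made-up sample whose totals match the `ceph-mgr` pgmap lines (60 GiB raw, 825 MiB used); the exact field names in `ceph df` output vary slightly across Ceph releases, so treat this layout as an assumption:

```python
import json

# Trimmed stand-in for `ceph df --format=json` output; only the cluster-wide
# "stats" block is kept. Values chosen to match the pgmap lines in this log.
sample = json.loads("""
{"stats": {"total_bytes": 64424509440,
           "total_used_bytes": 865075200,
           "total_avail_bytes": 63559434240}}
""")

GiB = 1024 ** 3
total_gb = sample["stats"]["total_bytes"] / GiB
avail_gb = sample["stats"]["total_avail_bytes"] / GiB
print(f"{avail_gb:.0f} GiB / {total_gb:.0f} GiB avail")
```

This is where the `free_disk=59.9...GB` figure in the hypervisor resource view below ultimately comes from when the ephemeral backend is RBD.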
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.094 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:51:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.095 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:51:41 np0005534516 podman[367067]: 2025-11-25 08:51:41.253437607 +0000 UTC m=+0.121796993 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.386 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.388 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3682MB free_disk=59.9428825378418GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.388 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.462 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.463 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.463 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 03:51:41 np0005534516 nova_compute[253538]: 2025-11-25 08:51:41.667 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:51:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 03:51:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445386032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.198 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.205 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.220 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
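The inventory dict above is what placement uses to compute schedulable capacity per resource class: capacity = (total - reserved) * allocation_ratio. Applied to the numbers in this log (a quick check, not Nova code):

```python
# Inventory exactly as reported for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4,
# trimmed to the fields the capacity formula needs.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def usable(inv):
    """Placement's effective capacity: (total - reserved) * allocation_ratio."""
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

for rc, inv in inventory.items():
    print(rc, usable(inv))
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2 (approx.)
```

With 4.0x CPU overcommit the 8 physical vCPUs schedule as 32, which is why the single-VCPU allocation held by instance e7bccd38 leaves the host far from full.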
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.239 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.240 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.398 253542 DEBUG nova.compute.manager [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.399 253542 DEBUG nova.compute.manager [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.399 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.400 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.400 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:51:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:42 np0005534516 nova_compute[253538]: 2025-11-25 08:51:42.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 03:51:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:44.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:51:44 np0005534516 nova_compute[253538]: 2025-11-25 08:51:44.452 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:51:44 np0005534516 nova_compute[253538]: 2025-11-25 08:51:44.453 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
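The `network_info` payload in these cache updates is plain JSON, so extracting addresses from it is a dictionary walk. Note the difference between the two occurrences in this section: the earlier cache entry carried floating IP 192.168.122.244, while this refresh (triggered by the `network-changed` event) shows `"floating_ips": []`, i.e. the FIP was disassociated. A sketch over a sample trimmed to the fields used:

```python
import json

# Trimmed copy of the refreshed instance_info_cache payload from the log.
network_info = json.loads("""
[{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.12", "type": "fixed",
             "floating_ips": []}]}]}}]
""")

# Walk VIF -> subnet -> IP, collecting any associated floating addresses.
for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], fips)
```

Run against the earlier cache entry the same walk would yield `['192.168.122.244']` instead of the empty list.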
Nov 25 03:51:44 np0005534516 nova_compute[253538]: 2025-11-25 08:51:44.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:44 np0005534516 nova_compute[253538]: 2025-11-25 08:51:44.497 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:51:45 np0005534516 nova_compute[253538]: 2025-11-25 08:51:45.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:45 np0005534516 nova_compute[253538]: 2025-11-25 08:51:45.240 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:51:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 03:51:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:47 np0005534516 nova_compute[253538]: 2025-11-25 08:51:47.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:51:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 735 KiB/s wr, 18 op/s
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.033 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.033 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.049 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.129 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.129 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.136 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.136 253542 INFO nova.compute.claims [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.257 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:51:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121873357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.675 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.683 253542 DEBUG nova.compute.provider_tree [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.698 253542 DEBUG nova.scheduler.client.report [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.726 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.727 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.786 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.787 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.810 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.826 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.957 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.958 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.959 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating image(s)#033[00m
Nov 25 03:51:48 np0005534516 nova_compute[253538]: 2025-11-25 08:51:48.984 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.014 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.049 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.053 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.100 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.100 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.117 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.135 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.136 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.136 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.137 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.159 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.163 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.226 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.227 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.234 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.234 253542 INFO nova.compute.claims [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.334 253542 DEBUG nova.policy [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.401 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.485 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.554 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.658 253542 DEBUG nova.objects.instance [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.674 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.674 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ensure instance console log exists: /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 715 KiB/s wr, 15 op/s
Nov 25 03:51:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:51:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932646589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.871 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.879 253542 DEBUG nova.compute.provider_tree [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.895 253542 DEBUG nova.scheduler.client.report [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.942 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:49 np0005534516 nova_compute[253538]: 2025-11-25 08:51:49.943 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.005 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.006 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.024 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.043 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.148 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.149 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.150 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating image(s)
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.170 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.193 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.217 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.221 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.269 253542 DEBUG nova.policy [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.313 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.314 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.315 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.315 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.339 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.343 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.503 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Successfully created port: 5518ee18-fcb4-4885-8bc6-a3daba84baff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.697 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.769 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.884 253542 DEBUG nova.objects.instance [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.901 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.902 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Ensure instance console log exists: /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.902 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.903 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:51:50 np0005534516 nova_compute[253538]: 2025-11-25 08:51:50.903 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.060 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Successfully created port: a535be3a-db4d-4a49-9772-867e101290fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.575 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Successfully updated port: 5518ee18-fcb4-4885-8bc6-a3daba84baff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.679 253542 DEBUG nova.compute.manager [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.680 253542 DEBUG nova.compute.manager [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.680 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:51:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 714 KiB/s wr, 15 op/s
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.761 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:51:51 np0005534516 nova_compute[253538]: 2025-11-25 08:51:51.997 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Successfully updated port: a535be3a-db4d-4a49-9772-867e101290fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.008 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.009 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.009 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.082 253542 DEBUG nova.compute.manager [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-changed-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.083 253542 DEBUG nova.compute.manager [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing instance network info cache due to event network-changed-a535be3a-db4d-4a49-9772-867e101290fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.083 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.160 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:51:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.603 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.625 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.626 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance network_info: |[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.626 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.627 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.630 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start _get_guest_xml network_info=[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.644 253542 WARNING nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.649 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.649 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.653 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.653 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.657 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.660 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:52 np0005534516 nova_compute[253538]: 2025-11-25 08:51:52.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.024 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.062 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.062 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance network_info: |[{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.063 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.063 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing network info cache for port a535be3a-db4d-4a49-9772-867e101290fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.065 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start _get_guest_xml network_info=[{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.070 253542 WARNING nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404244749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.074 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.074 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.080 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.086 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.120 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.148 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.153 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:51:53
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr', 'volumes']
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35121730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.580 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233193304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.603 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.608 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.658 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.661 253542 DEBUG nova.virt.libvirt.vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:48Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.662 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.663 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.665 253542 DEBUG nova.objects.instance [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.678 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <uuid>9fed0304-736a-4739-9e78-a95c676d1206</uuid>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <name>instance-00000070</name>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-327371372</nova:name>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:51:52</nova:creationTime>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <nova:port uuid="5518ee18-fcb4-4885-8bc6-a3daba84baff">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="serial">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="uuid">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk.config">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:aa:ad:17"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <target dev="tap5518ee18-fc"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log" append="off"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:51:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:51:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:51:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:51:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.680 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Preparing to wait for external event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.681 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.682 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.682 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.684 253542 DEBUG nova.virt.libvirt.vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:48Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.684 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.685 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.686 253542 DEBUG os_vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.688 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.689 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.695 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5518ee18-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.696 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5518ee18-fc, col_values=(('external_ids', {'iface-id': '5518ee18-fcb4-4885-8bc6-a3daba84baff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ad:17', 'vm-uuid': '9fed0304-736a-4739-9e78-a95c676d1206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:53 np0005534516 NetworkManager[48915]: <info>  [1764060713.7000] manager: (tap5518ee18-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.710 253542 INFO os_vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')#033[00m
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 230 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 42 op/s
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.754 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.754 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.755 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:aa:ad:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.755 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Using config drive#033[00m
Nov 25 03:51:53 np0005534516 nova_compute[253538]: 2025-11-25 08:51:53.788 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:51:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:51:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:51:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228729770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.064 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.065 253542 DEBUG nova.virt.libvirt.vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.066 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.066 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.067 253542 DEBUG nova.objects.instance [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.069 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updated VIF entry in instance network info cache for port a535be3a-db4d-4a49-9772-867e101290fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.069 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.084 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <uuid>47279d1c-3634-4ea6-a752-99950cd5ce6c</uuid>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <name>instance-00000071</name>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-737057420</nova:name>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:51:53</nova:creationTime>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <nova:port uuid="a535be3a-db4d-4a49-9772-867e101290fa">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="serial">47279d1c-3634-4ea6-a752-99950cd5ce6c</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="uuid">47279d1c-3634-4ea6-a752-99950cd5ce6c</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/47279d1c-3634-4ea6-a752-99950cd5ce6c_disk">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:5f:1f:f1"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <target dev="tapa535be3a-db"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/console.log" append="off"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:51:54 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:51:54 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:51:54 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:51:54 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Preparing to wait for external event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.086 253542 DEBUG nova.virt.libvirt.vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.086 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.087 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.087 253542 DEBUG os_vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.089 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa535be3a-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa535be3a-db, col_values=(('external_ids', {'iface-id': 'a535be3a-db4d-4a49-9772-867e101290fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:1f:f1', 'vm-uuid': '47279d1c-3634-4ea6-a752-99950cd5ce6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:54 np0005534516 NetworkManager[48915]: <info>  [1764060714.0940] manager: (tapa535be3a-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.100 253542 INFO os_vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db')#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:5f:1f:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.161 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Using config drive#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.181 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.466 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.467 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.488 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.531 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating config drive at /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.537 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee75fmk3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.702 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee75fmk3" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.724 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.726 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.792 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating config drive at /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.797 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaktgeuh6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.922 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.923 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting local config drive /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config because it was imported into RBD.#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.937 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaktgeuh6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.961 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:51:54 np0005534516 nova_compute[253538]: 2025-11-25 08:51:54.964 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:51:54 np0005534516 kernel: tap5518ee18-fc: entered promiscuous mode
Nov 25 03:51:54 np0005534516 NetworkManager[48915]: <info>  [1764060714.9685] manager: (tap5518ee18-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Nov 25 03:51:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:54Z|01112|binding|INFO|Claiming lport 5518ee18-fcb4-4885-8bc6-a3daba84baff for this chassis.
Nov 25 03:51:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:54Z|01113|binding|INFO|5518ee18-fcb4-4885-8bc6-a3daba84baff: Claiming fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:51:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.983 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 bound to our chassis#033[00m
Nov 25 03:51:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.985 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 78cbfb83-5eb2-43b6-8132-ed291918f722#033[00m
Nov 25 03:51:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:54Z|01114|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff ovn-installed in OVS
Nov 25 03:51:54 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:54Z|01115|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff up in Southbound
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.998 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1675316-4b1e-47d0-ae46-cff9fdec791d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.999 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap78cbfb83-51 in ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:51:55 np0005534516 systemd-machined[215790]: New machine qemu-139-instance-00000070.
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.001 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap78cbfb83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[759becdb-88b8-4887-a837-c2dac60228bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94b92ab0-f7b1-4790-9e81-d5af333d4b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 systemd[1]: Started Virtual Machine qemu-139-instance-00000070.
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.018 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9c753d41-16ea-4ca0-a63f-e8a0eca9b7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 systemd-udevd[367739]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.0412] device (tap5518ee18-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.0421] device (tap5518ee18-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.045 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bddf972d-e65b-454c-a1ff-330870304af3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.076 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[29143171-ca1e-4cb5-b573-2838022dbf5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.0832] manager: (tap78cbfb83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.082 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a641f973-62b6-4b4c-a740-65bb146619fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 systemd-udevd[367748]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.120 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d92d6bc-d771-46a9-b285-c9e378727ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.123 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0e23a90b-1df1-4745-a4c3-f2b227d2e24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.150 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.151 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deleting local config drive /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config because it was imported into RBD.#033[00m
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.1539] device (tap78cbfb83-50): carrier: link connected
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.157 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[76ba6643-af1e-4bf8-a947-fcd92c730434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2744787-a89d-41b0-ada5-b380eff35a83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606975, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367790, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.196 253542 DEBUG nova.compute.manager [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.198 253542 DEBUG nova.compute.manager [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Processing event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:51:55 np0005534516 kernel: tapa535be3a-db: entered promiscuous mode
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.1997] manager: (tapa535be3a-db): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Nov 25 03:51:55 np0005534516 systemd-udevd[367782]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.2196] device (tapa535be3a-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.2210] device (tapa535be3a-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5b65bc-ac0f-437e-a6d5-432c0e1af6a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:20be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606975, 'tstamp': 606975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367796, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:55Z|01116|binding|INFO|Claiming lport a535be3a-db4d-4a49-9772-867e101290fa for this chassis.
Nov 25 03:51:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:55Z|01117|binding|INFO|a535be3a-db4d-4a49-9772-867e101290fa: Claiming fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.272 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:1f:f1 10.100.0.5'], port_security=['fa:16:3e:5f:1f:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47279d1c-3634-4ea6-a752-99950cd5ce6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'de7d7e23-cab0-4e13-9965-ee46854760f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a535be3a-db4d-4a49-9772-867e101290fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:51:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:55Z|01118|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa ovn-installed in OVS
Nov 25 03:51:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:55Z|01119|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa up in Southbound
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6e774a-9c3c-4ef2-994c-6989f275e804]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606975, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367802, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 systemd-machined[215790]: New machine qemu-140-instance-00000071.
Nov 25 03:51:55 np0005534516 systemd[1]: Started Virtual Machine qemu-140-instance-00000071.
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc3cbb7-03b9-4852-aa82-2b786f8d9357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8d5f23-f2c3-4bc4-9cfa-5b96ebc59d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.389 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.389 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.390 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78cbfb83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:55 np0005534516 kernel: tap78cbfb83-50: entered promiscuous mode
Nov 25 03:51:55 np0005534516 NetworkManager[48915]: <info>  [1764060715.3923] manager: (tap78cbfb83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.395 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap78cbfb83-50, col_values=(('external_ids', {'iface-id': '7a0c677f-94d5-4688-b88c-93d2fa378198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:51:55Z|01120|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.412 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f74b1c0-60cd-45ba-849d-b8c85594a257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.414 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.414 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'env', 'PROCESS_TAG=haproxy-78cbfb83-5eb2-43b6-8132-ed291918f722', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/78cbfb83-5eb2-43b6-8132-ed291918f722.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.477 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.476883, 9fed0304-736a-4739-9e78-a95c676d1206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Started (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.480 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.484 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG nova.compute.manager [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.494 253542 DEBUG nova.compute.manager [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Processing event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.495 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.499 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance spawned successfully.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.500 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.501 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.531 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.4771008, 9fed0304-736a-4739-9e78-a95c676d1206 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.532 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.546 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.548 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.548 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.563 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.48286, 9fed0304-736a-4739-9e78-a95c676d1206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.564 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.586 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.606 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.615 253542 INFO nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 6.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.616 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.672 253542 INFO nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 7.57 seconds to build instance.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.699 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Nov 25 03:51:55 np0005534516 podman[367903]: 2025-11-25 08:51:55.824166344 +0000 UTC m=+0.052958170 container create 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:51:55 np0005534516 systemd[1]: Started libpod-conmon-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope.
Nov 25 03:51:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:51:55 np0005534516 podman[367903]: 2025-11-25 08:51:55.79265027 +0000 UTC m=+0.021442116 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:51:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f13050eb7b6987378aae86fa19090b68d0867d3d270269cd362f317d03ce43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.900 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.901 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.900513, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.901 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Started (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.905 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:51:55 np0005534516 podman[367903]: 2025-11-25 08:51:55.914204575 +0000 UTC m=+0.142996421 container init 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.915 253542 INFO nova.virt.libvirt.driver [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance spawned successfully.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.917 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:51:55 np0005534516 podman[367903]: 2025-11-25 08:51:55.92071043 +0000 UTC m=+0.149502256 container start 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.922 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.932 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.939 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.940 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.940 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.941 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.941 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.942 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:51:55 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : New worker (367949) forked
Nov 25 03:51:55 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : Loading success.
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.9014297, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.985 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a535be3a-db4d-4a49-9772-867e101290fa in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis#033[00m
Nov 25 03:51:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.992 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.995 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.9112117, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.996 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.998 253542 INFO nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 5.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:51:55 np0005534516 nova_compute[253538]: 2025-11-25 08:51:55.998 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f5ce0f-27d2-47bb-a93e-c1e48e202117]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.020 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.023 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.029 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee2c581-eced-4311-bd6e-7e04837857ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.032 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[499a49b2-3d68-4c6d-a033-6d33fc8400ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.047 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.058 253542 INFO nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 6.85 seconds to build instance.#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.059 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ce3f07-fe63-463e-8e42-96da08f6165c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.072 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97c242c1-c655-46da-9758-b0acaea06518]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367963, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64b15398-2333-4173-aee1-9fe39123f319]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603462, 'tstamp': 603462}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367964, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603466, 'tstamp': 603466}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367964, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.099 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:56 np0005534516 nova_compute[253538]: 2025-11-25 08:51:56.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:51:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:51:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:51:57 np0005534516 nova_compute[253538]: 2025-11-25 08:51:57.715 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 105 op/s
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.049 253542 DEBUG nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 WARNING nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.114 253542 DEBUG nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.115 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.115 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 DEBUG nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:51:58 np0005534516 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 WARNING nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received unexpected event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with vm_state active and task_state None.#033[00m
Nov 25 03:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:51:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:51:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:51:59 np0005534516 nova_compute[253538]: 2025-11-25 08:51:59.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:51:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.71588275 +0000 UTC m=+0.062732181 container create bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:51:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Nov 25 03:51:59 np0005534516 systemd[1]: Started libpod-conmon-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope.
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.690538591 +0000 UTC m=+0.037388092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:51:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.83159798 +0000 UTC m=+0.178447421 container init bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.840798206 +0000 UTC m=+0.187647687 container start bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.847359241 +0000 UTC m=+0.194208692 container attach bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 03:51:59 np0005534516 systemd[1]: libpod-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope: Deactivated successfully.
Nov 25 03:51:59 np0005534516 kind_pasteur[368371]: 167 167
Nov 25 03:51:59 np0005534516 conmon[368371]: conmon bb38fea992b459763201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope/container/memory.events
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.852736345 +0000 UTC m=+0.199585766 container died bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:51:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d523399cf4cc2c3c97b53d61c54bfd02b4d940f1a209a363e7aa33a89f676f6e-merged.mount: Deactivated successfully.
Nov 25 03:51:59 np0005534516 podman[368354]: 2025-11-25 08:51:59.918201018 +0000 UTC m=+0.265050439 container remove bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 03:51:59 np0005534516 systemd[1]: libpod-conmon-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope: Deactivated successfully.
Nov 25 03:52:00 np0005534516 podman[368396]: 2025-11-25 08:52:00.144658303 +0000 UTC m=+0.053470153 container create 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:52:00 np0005534516 systemd[1]: Started libpod-conmon-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope.
Nov 25 03:52:00 np0005534516 podman[368396]: 2025-11-25 08:52:00.119352265 +0000 UTC m=+0.028164125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.228 253542 DEBUG nova.compute.manager [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-changed-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.230 253542 DEBUG nova.compute.manager [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing instance network info cache due to event network-changed-a535be3a-db4d-4a49-9772-867e101290fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.230 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.231 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.231 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing network info cache for port a535be3a-db4d-4a49-9772-867e101290fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:52:00 np0005534516 podman[368396]: 2025-11-25 08:52:00.243263275 +0000 UTC m=+0.152075165 container init 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 03:52:00 np0005534516 podman[368396]: 2025-11-25 08:52:00.251757952 +0000 UTC m=+0.160569782 container start 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:52:00 np0005534516 podman[368396]: 2025-11-25 08:52:00.264226636 +0000 UTC m=+0.173038506 container attach 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.481 253542 DEBUG nova.compute.manager [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.482 253542 DEBUG nova.compute.manager [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.482 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.483 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:52:00 np0005534516 nova_compute[253538]: 2025-11-25 08:52:00.483 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:52:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 185 op/s
Nov 25 03:52:01 np0005534516 nifty_noether[368412]: [
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:    {
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "available": false,
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "ceph_device": false,
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "lsm_data": {},
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "lvs": [],
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "path": "/dev/sr0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "rejected_reasons": [
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "Insufficient space (<5GB)",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "Has a FileSystem"
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        ],
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        "sys_api": {
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "actuators": null,
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "device_nodes": "sr0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "devname": "sr0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "human_readable_size": "482.00 KB",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "id_bus": "ata",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "model": "QEMU DVD-ROM",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "nr_requests": "2",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "parent": "/dev/sr0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "partitions": {},
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "path": "/dev/sr0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "removable": "1",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "rev": "2.5+",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "ro": "0",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "rotational": "1",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "sas_address": "",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "sas_device_handle": "",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "scheduler_mode": "mq-deadline",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "sectors": 0,
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "sectorsize": "2048",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "size": 493568.0,
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "support_discard": "2048",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "type": "disk",
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:            "vendor": "QEMU"
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:        }
Nov 25 03:52:01 np0005534516 nifty_noether[368412]:    }
Nov 25 03:52:01 np0005534516 nifty_noether[368412]: ]
Nov 25 03:52:01 np0005534516 nova_compute[253538]: 2025-11-25 08:52:01.746 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updated VIF entry in instance network info cache for port a535be3a-db4d-4a49-9772-867e101290fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:52:01 np0005534516 nova_compute[253538]: 2025-11-25 08:52:01.749 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:01 np0005534516 nova_compute[253538]: 2025-11-25 08:52:01.767 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:52:01 np0005534516 systemd[1]: libpod-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Deactivated successfully.
Nov 25 03:52:01 np0005534516 systemd[1]: libpod-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Consumed 1.492s CPU time.
Nov 25 03:52:01 np0005534516 podman[370400]: 2025-11-25 08:52:01.814132214 +0000 UTC m=+0.030744454 container died 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 03:52:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f-merged.mount: Deactivated successfully.
Nov 25 03:52:01 np0005534516 podman[370400]: 2025-11-25 08:52:01.899854361 +0000 UTC m=+0.116466601 container remove 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:52:01 np0005534516 systemd[1]: libpod-conmon-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Deactivated successfully.
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6f615931-3e1c-42cb-b218-9ea9a16b4f9d does not exist
Nov 25 03:52:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 780bb90f-8739-4266-85b3-78d307608b68 does not exist
Nov 25 03:52:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9788e609-56cf-4cb7-b904-32bb3ba360ec does not exist
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:52:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:52:02 np0005534516 nova_compute[253538]: 2025-11-25 08:52:02.083 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:52:02 np0005534516 nova_compute[253538]: 2025-11-25 08:52:02.085 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:02 np0005534516 nova_compute[253538]: 2025-11-25 08:52:02.103 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.606618308 +0000 UTC m=+0.038840681 container create 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:52:02 np0005534516 systemd[1]: Started libpod-conmon-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope.
Nov 25 03:52:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.588789921 +0000 UTC m=+0.021012314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.696808324 +0000 UTC m=+0.129030727 container init 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.703467653 +0000 UTC m=+0.135690026 container start 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:52:02 np0005534516 gifted_meninsky[370571]: 167 167
Nov 25 03:52:02 np0005534516 systemd[1]: libpod-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope: Deactivated successfully.
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.710706806 +0000 UTC m=+0.142929179 container attach 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:52:02 np0005534516 conmon[370571]: conmon 7c09e8733af9f00dd40d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope/container/memory.events
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.712037952 +0000 UTC m=+0.144260325 container died 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:52:02 np0005534516 nova_compute[253538]: 2025-11-25 08:52:02.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cd160ed41af57d781304cbacc2ed6a0d95397042c17de0783f3518d3c1033286-merged.mount: Deactivated successfully.
Nov 25 03:52:02 np0005534516 podman[370554]: 2025-11-25 08:52:02.778013719 +0000 UTC m=+0.210236092 container remove 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 03:52:02 np0005534516 systemd[1]: libpod-conmon-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope: Deactivated successfully.
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:02 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:52:02 np0005534516 podman[370596]: 2025-11-25 08:52:02.993413587 +0000 UTC m=+0.052530608 container create 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:52:03 np0005534516 systemd[1]: Started libpod-conmon-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope.
Nov 25 03:52:03 np0005534516 podman[370596]: 2025-11-25 08:52:02.962003656 +0000 UTC m=+0.021120667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:03 np0005534516 podman[370596]: 2025-11-25 08:52:03.093359504 +0000 UTC m=+0.152476495 container init 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 03:52:03 np0005534516 podman[370596]: 2025-11-25 08:52:03.103151606 +0000 UTC m=+0.162268587 container start 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:52:03 np0005534516 podman[370596]: 2025-11-25 08:52:03.108845109 +0000 UTC m=+0.167962090 container attach 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:52:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Nov 25 03:52:04 np0005534516 nova_compute[253538]: 2025-11-25 08:52:04.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014566723440450609 of space, bias 1.0, pg target 0.4370017032135183 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:52:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:52:04 np0005534516 intelligent_swartz[370612]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:52:04 np0005534516 intelligent_swartz[370612]: --> relative data size: 1.0
Nov 25 03:52:04 np0005534516 intelligent_swartz[370612]: --> All data devices are unavailable
Nov 25 03:52:04 np0005534516 systemd[1]: libpod-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Deactivated successfully.
Nov 25 03:52:04 np0005534516 systemd[1]: libpod-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Consumed 1.036s CPU time.
Nov 25 03:52:04 np0005534516 podman[370641]: 2025-11-25 08:52:04.302648491 +0000 UTC m=+0.029913492 container died 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:52:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634-merged.mount: Deactivated successfully.
Nov 25 03:52:04 np0005534516 podman[370642]: 2025-11-25 08:52:04.349643599 +0000 UTC m=+0.057031018 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 03:52:04 np0005534516 podman[370641]: 2025-11-25 08:52:04.363182812 +0000 UTC m=+0.090447793 container remove 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:52:04 np0005534516 systemd[1]: libpod-conmon-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Deactivated successfully.
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.075062846 +0000 UTC m=+0.084363269 container create 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.020022583 +0000 UTC m=+0.029323006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:05 np0005534516 systemd[1]: Started libpod-conmon-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope.
Nov 25 03:52:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.319581345 +0000 UTC m=+0.328881778 container init 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.329203902 +0000 UTC m=+0.338504315 container start 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:52:05 np0005534516 sleepy_lalande[370824]: 167 167
Nov 25 03:52:05 np0005534516 systemd[1]: libpod-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope: Deactivated successfully.
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.570026712 +0000 UTC m=+0.579327155 container attach 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.571868682 +0000 UTC m=+0.581169095 container died 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:52:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6653c9d1d70de4a691e965d8cba549567a0bf15fb185a643fc470f187d947e4c-merged.mount: Deactivated successfully.
Nov 25 03:52:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Nov 25 03:52:05 np0005534516 podman[370808]: 2025-11-25 08:52:05.98138641 +0000 UTC m=+0.990686853 container remove 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:52:06 np0005534516 systemd[1]: libpod-conmon-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope: Deactivated successfully.
Nov 25 03:52:06 np0005534516 podman[370843]: 2025-11-25 08:52:06.19538215 +0000 UTC m=+0.125096000 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 03:52:06 np0005534516 podman[370864]: 2025-11-25 08:52:06.190049867 +0000 UTC m=+0.031471943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:06 np0005534516 podman[370864]: 2025-11-25 08:52:06.360754509 +0000 UTC m=+0.202176535 container create f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:52:06 np0005534516 systemd[1]: Started libpod-conmon-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope.
Nov 25 03:52:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:06 np0005534516 podman[370864]: 2025-11-25 08:52:06.661020601 +0000 UTC m=+0.502442627 container init f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 03:52:06 np0005534516 podman[370864]: 2025-11-25 08:52:06.669480877 +0000 UTC m=+0.510902903 container start f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:52:06 np0005534516 podman[370864]: 2025-11-25 08:52:06.806022934 +0000 UTC m=+0.647444970 container attach f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:52:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]: {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    "0": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "devices": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "/dev/loop3"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            ],
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_name": "ceph_lv0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_size": "21470642176",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "name": "ceph_lv0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "tags": {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_name": "ceph",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.crush_device_class": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.encrypted": "0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_id": "0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.vdo": "0"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            },
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "vg_name": "ceph_vg0"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        }
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    ],
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    "1": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "devices": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "/dev/loop4"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            ],
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_name": "ceph_lv1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_size": "21470642176",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "name": "ceph_lv1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "tags": {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_name": "ceph",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.crush_device_class": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.encrypted": "0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_id": "1",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.vdo": "0"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            },
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "vg_name": "ceph_vg1"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        }
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    ],
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    "2": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "devices": [
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "/dev/loop5"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            ],
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_name": "ceph_lv2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_size": "21470642176",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "name": "ceph_lv2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "tags": {
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.cluster_name": "ceph",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.crush_device_class": "",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.encrypted": "0",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osd_id": "2",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:                "ceph.vdo": "0"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            },
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "type": "block",
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:            "vg_name": "ceph_vg2"
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:        }
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]:    ]
Nov 25 03:52:07 np0005534516 optimistic_colden[370882]: }
Nov 25 03:52:07 np0005534516 systemd[1]: libpod-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope: Deactivated successfully.
Nov 25 03:52:07 np0005534516 podman[370864]: 2025-11-25 08:52:07.540904676 +0000 UTC m=+1.382326692 container died f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 03:52:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1-merged.mount: Deactivated successfully.
Nov 25 03:52:07 np0005534516 nova_compute[253538]: 2025-11-25 08:52:07.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 161 op/s
Nov 25 03:52:08 np0005534516 podman[370864]: 2025-11-25 08:52:08.035233994 +0000 UTC m=+1.876656020 container remove f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 03:52:08 np0005534516 systemd[1]: libpod-conmon-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope: Deactivated successfully.
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.83524985 +0000 UTC m=+0.057986245 container create fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:52:08 np0005534516 systemd[1]: Started libpod-conmon-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope.
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.812181291 +0000 UTC m=+0.034917736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.931377504 +0000 UTC m=+0.154113939 container init fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.939651436 +0000 UTC m=+0.162387841 container start fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.943814236 +0000 UTC m=+0.166550691 container attach fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:52:08 np0005534516 dazzling_nash[371058]: 167 167
Nov 25 03:52:08 np0005534516 systemd[1]: libpod-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope: Deactivated successfully.
Nov 25 03:52:08 np0005534516 conmon[371058]: conmon fe2827d738c01c389eb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope/container/memory.events
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.946589551 +0000 UTC m=+0.169325956 container died fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:52:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cab6055323cbeb4fe2219d840867d1d0b8d7516049c4f00aee61e5b6bbad131f-merged.mount: Deactivated successfully.
Nov 25 03:52:08 np0005534516 podman[371042]: 2025-11-25 08:52:08.984461565 +0000 UTC m=+0.207197971 container remove fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:52:09 np0005534516 systemd[1]: libpod-conmon-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope: Deactivated successfully.
Nov 25 03:52:09 np0005534516 nova_compute[253538]: 2025-11-25 08:52:09.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:09 np0005534516 podman[371081]: 2025-11-25 08:52:09.17727418 +0000 UTC m=+0.042275324 container create 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:52:09 np0005534516 systemd[1]: Started libpod-conmon-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope.
Nov 25 03:52:09 np0005534516 podman[371081]: 2025-11-25 08:52:09.157448799 +0000 UTC m=+0.022449973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:52:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:09 np0005534516 podman[371081]: 2025-11-25 08:52:09.27100812 +0000 UTC m=+0.136009274 container init 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 03:52:09 np0005534516 podman[371081]: 2025-11-25 08:52:09.278524111 +0000 UTC m=+0.143525255 container start 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:52:09 np0005534516 podman[371081]: 2025-11-25 08:52:09.284812479 +0000 UTC m=+0.149813623 container attach 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:52:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:09Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 03:52:09 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:09Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 03:52:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]: {
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_id": 1,
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "type": "bluestore"
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    },
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_id": 2,
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "type": "bluestore"
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    },
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_id": 0,
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:        "type": "bluestore"
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]:    }
Nov 25 03:52:10 np0005534516 distracted_fermat[371097]: }
Nov 25 03:52:10 np0005534516 systemd[1]: libpod-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope: Deactivated successfully.
Nov 25 03:52:10 np0005534516 conmon[371097]: conmon 1d2ffaec223dc746e98b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope/container/memory.events
Nov 25 03:52:10 np0005534516 podman[371081]: 2025-11-25 08:52:10.283345221 +0000 UTC m=+1.148346385 container died 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:52:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a-merged.mount: Deactivated successfully.
Nov 25 03:52:10 np0005534516 podman[371081]: 2025-11-25 08:52:10.343273837 +0000 UTC m=+1.208274981 container remove 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:52:10 np0005534516 systemd[1]: libpod-conmon-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope: Deactivated successfully.
Nov 25 03:52:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:52:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:52:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2a2d1d77-28da-4fd6-adfb-09615c07e00e does not exist
Nov 25 03:52:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5e2f9f6c-db54-41a1-b0cf-463ea072696b does not exist
Nov 25 03:52:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:10Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:52:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:10Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:52:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:52:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 434 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 25 03:52:11 np0005534516 podman[371193]: 2025-11-25 08:52:11.888561802 +0000 UTC m=+0.130508756 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:52:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:12 np0005534516 nova_compute[253538]: 2025-11-25 08:52:12.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 320 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 4.2 MiB/s wr, 182 op/s
Nov 25 03:52:14 np0005534516 nova_compute[253538]: 2025-11-25 08:52:14.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:14 np0005534516 nova_compute[253538]: 2025-11-25 08:52:14.884 253542 INFO nova.compute.manager [None req-f8a57218-974c-44b2-989c-6b1b7fca6cfb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Get console output#033[00m
Nov 25 03:52:14 np0005534516 nova_compute[253538]: 2025-11-25 08:52:14.891 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.228 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.228 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.230 253542 INFO nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Terminating instance#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.231 253542 DEBUG nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:52:15 np0005534516 kernel: tapa535be3a-db (unregistering): left promiscuous mode
Nov 25 03:52:15 np0005534516 NetworkManager[48915]: <info>  [1764060735.2960] device (tapa535be3a-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:52:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:15Z|01121|binding|INFO|Releasing lport a535be3a-db4d-4a49-9772-867e101290fa from this chassis (sb_readonly=0)
Nov 25 03:52:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:15Z|01122|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa down in Southbound
Nov 25 03:52:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:15Z|01123|binding|INFO|Removing iface tapa535be3a-db ovn-installed in OVS
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.309 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.318 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:1f:f1 10.100.0.5'], port_security=['fa:16:3e:5f:1f:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47279d1c-3634-4ea6-a752-99950cd5ce6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'de7d7e23-cab0-4e13-9965-ee46854760f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a535be3a-db4d-4a49-9772-867e101290fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.319 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a535be3a-db4d-4a49-9772-867e101290fa in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.320 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.329 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.346 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c5da7d-ba87-4c41-aa48-354196296b93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 25 03:52:15 np0005534516 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Consumed 13.607s CPU time.
Nov 25 03:52:15 np0005534516 systemd-machined[215790]: Machine qemu-140-instance-00000071 terminated.
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.375 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9417b609-9c42-46eb-baab-6db04f1d6cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.378 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0546267f-ecb3-477b-a9ef-432eff5edff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.405 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[74152124-abfd-4014-8078-dc0fafa45161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6908a1-5a57-4f7d-bf8d-59c3bcfab420]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371231, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387bea50-8b09-4ce5-8192-55e3990209b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603462, 'tstamp': 603462}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371232, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603466, 'tstamp': 603466}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371232, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.441 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.451 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.475 253542 INFO nova.virt.libvirt.driver [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance destroyed successfully.#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.475 253542 DEBUG nova.objects.instance [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.489 253542 DEBUG nova.virt.libvirt.vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:56Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.489 253542 DEBUG nova.network.os_vif_util [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.490 253542 DEBUG nova.network.os_vif_util [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.490 253542 DEBUG os_vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa535be3a-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.503 253542 INFO os_vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db')#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.531 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.532 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.532 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:52:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 326 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.987 253542 INFO nova.virt.libvirt.driver [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deleting instance files /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c_del#033[00m
Nov 25 03:52:15 np0005534516 nova_compute[253538]: 2025-11-25 08:52:15.988 253542 INFO nova.virt.libvirt.driver [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deletion of /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c_del complete#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.048 253542 INFO nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 1.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.049 253542 DEBUG oslo.service.loopingcall [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.050 253542 DEBUG nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.050 253542 DEBUG nova.network.neutron [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:52:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.614 253542 DEBUG nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.615 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.615 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 DEBUG nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 WARNING nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received unexpected event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:52:17 np0005534516 nova_compute[253538]: 2025-11-25 08:52:17.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 299 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 25 03:52:18 np0005534516 nova_compute[253538]: 2025-11-25 08:52:18.118 253542 INFO nova.compute.manager [None req-f3878c9c-63c5-4ae1-b21b-c1ea62a9a88e 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Get console output#033[00m
Nov 25 03:52:18 np0005534516 nova_compute[253538]: 2025-11-25 08:52:18.123 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.233455) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738233482, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1792, "num_deletes": 258, "total_data_size": 2816350, "memory_usage": 2856296, "flush_reason": "Manual Compaction"}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738253452, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2741872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43342, "largest_seqno": 45133, "table_properties": {"data_size": 2733469, "index_size": 5153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17444, "raw_average_key_size": 20, "raw_value_size": 2716733, "raw_average_value_size": 3196, "num_data_blocks": 228, "num_entries": 850, "num_filter_entries": 850, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060578, "oldest_key_time": 1764060578, "file_creation_time": 1764060738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 20035 microseconds, and 11605 cpu microseconds.
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.253488) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2741872 bytes OK
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.253506) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255154) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255168) EVENT_LOG_v1 {"time_micros": 1764060738255163, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2808540, prev total WAL file size 2808540, number of live WAL files 2.
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.256189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2677KB)], [101(7726KB)]
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738256219, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10653507, "oldest_snapshot_seqno": -1}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6614 keys, 8933452 bytes, temperature: kUnknown
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738322020, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8933452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8889601, "index_size": 26203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172342, "raw_average_key_size": 26, "raw_value_size": 8771329, "raw_average_value_size": 1326, "num_data_blocks": 1027, "num_entries": 6614, "num_filter_entries": 6614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.322260) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8933452 bytes
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.323800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.7 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.5 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7140, records dropped: 526 output_compression: NoCompression
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.323819) EVENT_LOG_v1 {"time_micros": 1764060738323809, "job": 60, "event": "compaction_finished", "compaction_time_micros": 65870, "compaction_time_cpu_micros": 20839, "output_level": 6, "num_output_files": 1, "total_output_size": 8933452, "num_input_records": 7140, "num_output_records": 6614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738324544, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738326060, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.256124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.170 253542 DEBUG nova.network.neutron [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.217 253542 INFO nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 2.17 seconds to deallocate network for instance.#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.256 253542 DEBUG nova.compute.manager [req-9e7c019a-1fcd-4404-9f4a-eacfaa1ea1ac req-c99b3aa3-1c75-426a-9b51-185c9b489121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-deleted-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.274 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.274 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.419 253542 DEBUG oslo_concurrency.processutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.679 253542 INFO nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Rebuilding instance#033[00m
Nov 25 03:52:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 4.3 MiB/s wr, 215 op/s
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.897 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226396814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.910 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.924 253542 DEBUG oslo_concurrency.processutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.931 253542 DEBUG nova.compute.provider_tree [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.964 253542 DEBUG nova.scheduler.client.report [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.969 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.983 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:19 np0005534516 nova_compute[253538]: 2025-11-25 08:52:19.997 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.006 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.014 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.020 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.029 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.091 253542 INFO nova.scheduler.client.report [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 47279d1c-3634-4ea6-a752-99950cd5ce6c#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.184 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:20 np0005534516 nova_compute[253538]: 2025-11-25 08:52:20.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 03:52:22 np0005534516 kernel: tap5518ee18-fc (unregistering): left promiscuous mode
Nov 25 03:52:22 np0005534516 NetworkManager[48915]: <info>  [1764060742.2766] device (tap5518ee18-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:52:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:22Z|01124|binding|INFO|Releasing lport 5518ee18-fcb4-4885-8bc6-a3daba84baff from this chassis (sb_readonly=0)
Nov 25 03:52:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:22Z|01125|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff down in Southbound
Nov 25 03:52:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:22Z|01126|binding|INFO|Removing iface tap5518ee18-fc ovn-installed in OVS
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.306 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.307 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 unbound from our chassis#033[00m
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.308 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78cbfb83-5eb2-43b6-8132-ed291918f722, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.309 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31d12b23-fcde-4540-b4e2-a5ffcdb72126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace which is not needed anymore#033[00m
Nov 25 03:52:22 np0005534516 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 03:52:22 np0005534516 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Consumed 14.163s CPU time.
Nov 25 03:52:22 np0005534516 systemd-machined[215790]: Machine qemu-139-instance-00000070 terminated.
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : haproxy version is 2.8.14-c23fe91
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : path to executable is /usr/sbin/haproxy
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : Exiting Master process...
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : Exiting Master process...
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [ALERT]    (367947) : Current worker (367949) exited with code 143 (Terminated)
Nov 25 03:52:22 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : All workers exited. Exiting... (0)
Nov 25 03:52:22 np0005534516 systemd[1]: libpod-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope: Deactivated successfully.
Nov 25 03:52:22 np0005534516 conmon[367943]: conmon 354c4c5ebb81452fe5e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope/container/memory.events
Nov 25 03:52:22 np0005534516 podman[371306]: 2025-11-25 08:52:22.452635646 +0000 UTC m=+0.044770970 container died 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:52:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f-userdata-shm.mount: Deactivated successfully.
Nov 25 03:52:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b6f13050eb7b6987378aae86fa19090b68d0867d3d270269cd362f317d03ce43-merged.mount: Deactivated successfully.
Nov 25 03:52:22 np0005534516 podman[371306]: 2025-11-25 08:52:22.507898036 +0000 UTC m=+0.100033310 container cleanup 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:52:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:22 np0005534516 systemd[1]: libpod-conmon-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope: Deactivated successfully.
Nov 25 03:52:22 np0005534516 podman[371345]: 2025-11-25 08:52:22.581177518 +0000 UTC m=+0.046484885 container remove 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.587 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5072c573-aa1c-462e-9052-40acfdf20ba7]: (4, ('Tue Nov 25 08:52:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f)\n354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f\nTue Nov 25 08:52:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f)\n354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.590 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6477721-71f3-4d9e-85f5-5fa62cd355c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.590 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:22 np0005534516 kernel: tap78cbfb83-50: left promiscuous mode
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1b7684-54b2-4bfe-8be4-d97c5d238cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.630 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[871da7e6-3d07-495e-8a3a-3dd4d2b96f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5621982-820b-4951-81e9-5e3c6b896d15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.647 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c22c2c-4e5a-46d5-a4be-0d80be280876]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606966, 'reachable_time': 25934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371363, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.649 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:52:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.650 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da0dd7f6-e935-48d8-8af0-2f00ac57538d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:22 np0005534516 systemd[1]: run-netns-ovnmeta\x2d78cbfb83\x2d5eb2\x2d43b6\x2d8132\x2ded291918f722.mount: Deactivated successfully.
Nov 25 03:52:22 np0005534516 nova_compute[253538]: 2025-11-25 08:52:22.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.048 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.056 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.061 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.062 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:19Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.063 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.064 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.064 253542 DEBUG os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5518ee18-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.071 253542 DEBUG nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 WARNING nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.073 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.075 253542 INFO os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')#033[00m
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.500 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting instance files /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.502 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deletion of /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del complete#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.714 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.715 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating image(s)#033[00m
Nov 25 03:52:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 232 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.745 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.770 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.796 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.801 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.879 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.880 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.880 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.881 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.881 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.883 253542 INFO nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Terminating instance#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.884 253542 DEBUG nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.907 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.907 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.908 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.909 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:23 np0005534516 kernel: tap2ad9a2b7-f5 (unregistering): left promiscuous mode
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.946 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:23 np0005534516 NetworkManager[48915]: <info>  [1764060743.9477] device (tap2ad9a2b7-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:52:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:23Z|01127|binding|INFO|Releasing lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 from this chassis (sb_readonly=0)
Nov 25 03:52:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:23Z|01128|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 down in Southbound
Nov 25 03:52:23 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:23Z|01129|binding|INFO|Removing iface tap2ad9a2b7-f5 ovn-installed in OVS
Nov 25 03:52:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.961 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:2a:a3 10.100.0.12'], port_security=['fa:16:3e:3e:2a:a3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '371b8f16-6d0a-48c6-b770-1fa4712eb5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.963 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis#033[00m
Nov 25 03:52:23 np0005534516 nova_compute[253538]: 2025-11-25 08:52:23.962 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.964 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:52:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.965 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a28fad-ac65-4b53-9b6d-12241abb6d20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.965 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 namespace which is not needed anymore#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:24 np0005534516 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 25 03:52:24 np0005534516 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Consumed 15.200s CPU time.
Nov 25 03:52:24 np0005534516 systemd-machined[215790]: Machine qemu-138-instance-0000006f terminated.
Nov 25 03:52:24 np0005534516 NetworkManager[48915]: <info>  [1764060744.1055] manager: (tap2ad9a2b7-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/461)
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.131 253542 INFO nova.virt.libvirt.driver [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance destroyed successfully.#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.132 253542 DEBUG nova.objects.instance [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : haproxy version is 2.8.14-c23fe91
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : path to executable is /usr/sbin/haproxy
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : Exiting Master process...
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : Exiting Master process...
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [ALERT]    (366871) : Current worker (366873) exited with code 143 (Terminated)
Nov 25 03:52:24 np0005534516 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : All workers exited. Exiting... (0)
Nov 25 03:52:24 np0005534516 systemd[1]: libpod-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope: Deactivated successfully.
Nov 25 03:52:24 np0005534516 podman[371496]: 2025-11-25 08:52:24.152971384 +0000 UTC m=+0.073526360 container died b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.152 253542 DEBUG nova.virt.libvirt.vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:20Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.155 253542 DEBUG nova.network.os_vif_util [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.156 253542 DEBUG nova.network.os_vif_util [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.156 253542 DEBUG os_vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.160 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad9a2b7-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.215 253542 INFO os_vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5')#033[00m
Nov 25 03:52:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529-userdata-shm.mount: Deactivated successfully.
Nov 25 03:52:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b2849636997d3f41f71e0b1f9f347c5bb88d5e1c968dd84aab3bbb4c45a6a526-merged.mount: Deactivated successfully.
Nov 25 03:52:24 np0005534516 podman[371496]: 2025-11-25 08:52:24.278079054 +0000 UTC m=+0.198634050 container cleanup b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:52:24 np0005534516 systemd[1]: libpod-conmon-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope: Deactivated successfully.
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.306 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:24 np0005534516 podman[371558]: 2025-11-25 08:52:24.344884344 +0000 UTC m=+0.043084965 container remove b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6a867c55-6b0d-4d0c-a506-9632f475cae7]: (4, ('Tue Nov 25 08:52:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 (b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529)\nb20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529\nTue Nov 25 08:52:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 (b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529)\nb20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ef1a28-c9c8-42c5-8092-47c7ca2dac4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.356 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:24 np0005534516 kernel: tap8daad2e3-50: left promiscuous mode
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.371 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.371 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.377 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56102816-58da-4525-bcc2-bc4c7047e07a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.405 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a890ece-4b52-4cf1-99c9-a678143ebfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c067a3a3-78a2-4f2a-836c-5cbd0094c978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.427 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8beffa-2641-4982-babe-75b9935d01f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603442, 'reachable_time': 21287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371626, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.429 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:52:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.429 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[889338ad-3cf5-41d5-97d4-6dc1a95cad4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:24 np0005534516 systemd[1]: run-netns-ovnmeta\x2d8daad2e3\x2d552f\x2d4ebe\x2d8fa4\x2d01c68ec704b1.mount: Deactivated successfully.
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.468 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.468 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ensure instance console log exists: /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.472 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start _get_guest_xml network_info=[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.476 253542 WARNING nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.482 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.483 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.486 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.486 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.487 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.487 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.491 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.502 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.583 253542 INFO nova.virt.libvirt.driver [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deleting instance files /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_del#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.584 253542 INFO nova.virt.libvirt.driver [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deletion of /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_del complete#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.641 253542 INFO nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.641 253542 DEBUG oslo.service.loopingcall [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.642 253542 DEBUG nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:52:24 np0005534516 nova_compute[253538]: 2025-11-25 08:52:24.642 253542 DEBUG nova.network.neutron [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:52:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:52:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991475864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.000 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.022 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.026 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.347 253542 DEBUG nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.347 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.348 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.351 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.351 253542 DEBUG nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.352 253542 WARNING nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:52:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:52:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917122493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.482 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.483 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:23Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.484 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.485 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.488 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <uuid>9fed0304-736a-4739-9e78-a95c676d1206</uuid>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <name>instance-00000070</name>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-327371372</nova:name>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:52:24</nova:creationTime>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <nova:port uuid="5518ee18-fcb4-4885-8bc6-a3daba84baff">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="serial">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="uuid">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk.config">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:aa:ad:17"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <target dev="tap5518ee18-fc"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log" append="off"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:52:25 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:52:25 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:52:25 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:52:25 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.491 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:23Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.492 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.494 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.495 253542 DEBUG os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.496 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.497 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.500 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5518ee18-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.501 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5518ee18-fc, col_values=(('external_ids', {'iface-id': '5518ee18-fcb4-4885-8bc6-a3daba84baff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ad:17', 'vm-uuid': '9fed0304-736a-4739-9e78-a95c676d1206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:25 np0005534516 NetworkManager[48915]: <info>  [1764060745.5037] manager: (tap5518ee18-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.509 253542 INFO os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.564 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.564 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.565 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:aa:ad:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.565 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Using config drive#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.591 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.605 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.647 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'keypairs' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.722 253542 DEBUG nova.network.neutron [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 200 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 120 KiB/s rd, 1.3 MiB/s wr, 85 op/s
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.749 253542 INFO nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 1.11 seconds to deallocate network for instance.#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.793 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.793 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:25 np0005534516 nova_compute[253538]: 2025-11-25 08:52:25.868 253542 DEBUG oslo_concurrency.processutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.105 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating config drive at /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.110 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb8b4hnl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.263 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb8b4hnl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.296 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.300 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/157297453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.371 253542 DEBUG oslo_concurrency.processutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.378 253542 DEBUG nova.compute.provider_tree [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.399 253542 DEBUG nova.scheduler.client.report [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.426 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.459 253542 INFO nova.scheduler.client.report [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.479 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.480 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.481 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.481 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 WARNING nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received unexpected event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-deleted-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.486 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.487 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting local config drive /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config because it was imported into RBD.#033[00m
Nov 25 03:52:26 np0005534516 kernel: tap5518ee18-fc: entered promiscuous mode
Nov 25 03:52:26 np0005534516 NetworkManager[48915]: <info>  [1764060746.5569] manager: (tap5518ee18-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Nov 25 03:52:26 np0005534516 systemd-udevd[371801]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:52:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:26Z|01130|binding|INFO|Claiming lport 5518ee18-fcb4-4885-8bc6-a3daba84baff for this chassis.
Nov 25 03:52:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:26Z|01131|binding|INFO|5518ee18-fcb4-4885-8bc6-a3daba84baff: Claiming fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.621 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:26 np0005534516 NetworkManager[48915]: <info>  [1764060746.6244] device (tap5518ee18-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:52:26 np0005534516 NetworkManager[48915]: <info>  [1764060746.6261] device (tap5518ee18-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:52:26 np0005534516 systemd-machined[215790]: New machine qemu-141-instance-00000070.
Nov 25 03:52:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:26Z|01132|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff ovn-installed in OVS
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:26 np0005534516 nova_compute[253538]: 2025-11-25 08:52:26.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:26 np0005534516 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Nov 25 03:52:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:26Z|01133|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff up in Southbound
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.725 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.726 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 bound to our chassis#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.728 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 78cbfb83-5eb2-43b6-8132-ed291918f722#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.742 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[99d6a78d-0a4d-4466-857a-ccd074d78ca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.744 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap78cbfb83-51 in ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.746 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap78cbfb83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe6781c-ddbd-4f29-80fc-d1f69f7ed005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.748 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed662358-bc2f-40ac-adf0-fa64f3b94e1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.760 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcb8776-5a30-422d-b9b7-670edbceb8c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01d65654-8ec5-4e78-ba28-5065d7603499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.806 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6a014f26-28b3-41fe-b58d-435a567ea851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32e81b52-3ed2-4ff7-b4ff-463666348b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 NetworkManager[48915]: <info>  [1764060746.8143] manager: (tap78cbfb83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Nov 25 03:52:26 np0005534516 systemd-udevd[371805]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.844 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4502674e-f9e6-40d5-8cba-9ca6ca6bf718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.848 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61d851ab-9a0a-45d3-a7ce-c834a3663eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 NetworkManager[48915]: <info>  [1764060746.8725] device (tap78cbfb83-50): carrier: link connected
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.879 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[33a731ef-7a7a-47f2-90ae-0ddaab991afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56c477e8-6f7b-4e67-9314-fe26ba18786e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 333], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610147, 'reachable_time': 34145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371837, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.917 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e73f3e1-00a0-4348-8955-293d718ed88f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:20be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610147, 'tstamp': 610147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371838, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.934 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0651a5b-c104-4a14-9063-38bd240d8918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 333], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610147, 'reachable_time': 34145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371839, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2a0af7-ff30-4055-a222-6da3e45aa235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c06782c7-18e3-48d6-986a-e99ff4f66a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.056 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.056 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.057 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78cbfb83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:27 np0005534516 NetworkManager[48915]: <info>  [1764060747.0601] manager: (tap78cbfb83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Nov 25 03:52:27 np0005534516 kernel: tap78cbfb83-50: entered promiscuous mode
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.064 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap78cbfb83-50, col_values=(('external_ids', {'iface-id': '7a0c677f-94d5-4688-b88c-93d2fa378198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:27Z|01134|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.097 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.098 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.100 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd838a-056d-4438-8f41-36bf05690731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.100 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:52:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.102 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'env', 'PROCESS_TAG=haproxy-78cbfb83-5eb2-43b6-8132-ed291918f722', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/78cbfb83-5eb2-43b6-8132-ed291918f722.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.270 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 9fed0304-736a-4739-9e78-a95c676d1206 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.270 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060747.2697103, 9fed0304-736a-4739-9e78-a95c676d1206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.272 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.273 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.277 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance spawned successfully.#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.278 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.293 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.299 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.303 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.303 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.305 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.332 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.332 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060747.2707255, 9fed0304-736a-4739-9e78-a95c676d1206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.333 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Started (Lifecycle Event)#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.347 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.350 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.381 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.405 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.429 253542 DEBUG nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.429 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.430 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.430 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.431 253542 DEBUG nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.431 253542 WARNING nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.450 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.450 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.451 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.505 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:27 np0005534516 podman[371913]: 2025-11-25 08:52:27.527089918 +0000 UTC m=+0.048689534 container create 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:27 np0005534516 systemd[1]: Started libpod-conmon-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope.
Nov 25 03:52:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:52:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e55ccd711f80e0874846fa98c3c9a5f347c2a29b653b3ef7eacea6b4a1c5bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:52:27 np0005534516 podman[371913]: 2025-11-25 08:52:27.503389493 +0000 UTC m=+0.024989129 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:52:27 np0005534516 podman[371913]: 2025-11-25 08:52:27.61454312 +0000 UTC m=+0.136142826 container init 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:52:27 np0005534516 podman[371913]: 2025-11-25 08:52:27.62123271 +0000 UTC m=+0.142832366 container start 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:52:27 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : New worker (371933) forked
Nov 25 03:52:27 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : Loading success.
Nov 25 03:52:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 140 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Nov 25 03:52:27 np0005534516 nova_compute[253538]: 2025-11-25 08:52:27.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:52:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:52:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:52:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.520 253542 DEBUG nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.521 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.522 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.523 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.523 253542 DEBUG nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:29 np0005534516 nova_compute[253538]: 2025-11-25 08:52:29.524 253542 WARNING nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state None.#033[00m
Nov 25 03:52:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 03:52:30 np0005534516 nova_compute[253538]: 2025-11-25 08:52:30.472 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060735.4713962, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:52:30 np0005534516 nova_compute[253538]: 2025-11-25 08:52:30.473 253542 INFO nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:52:30 np0005534516 nova_compute[253538]: 2025-11-25 08:52:30.494 253542 DEBUG nova.compute.manager [None req-62faa6f0-fb7e-4ad2-ab29-84510384c51c - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:30Z|01135|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 03:52:30 np0005534516 nova_compute[253538]: 2025-11-25 08:52:30.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:30 np0005534516 nova_compute[253538]: 2025-11-25 08:52:30.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 03:52:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:32 np0005534516 nova_compute[253538]: 2025-11-25 08:52:32.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:33 np0005534516 nova_compute[253538]: 2025-11-25 08:52:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:33 np0005534516 nova_compute[253538]: 2025-11-25 08:52:33.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 03:52:34 np0005534516 podman[371942]: 2025-11-25 08:52:34.797169732 +0000 UTC m=+0.048462168 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 03:52:35 np0005534516 nova_compute[253538]: 2025-11-25 08:52:35.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:52:36 np0005534516 podman[371960]: 2025-11-25 08:52:36.823247744 +0000 UTC m=+0.075577425 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.961 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:52:36 np0005534516 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:37 np0005534516 nova_compute[253538]: 2025-11-25 08:52:37.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 602 KiB/s wr, 122 op/s
Nov 25 03:52:39 np0005534516 nova_compute[253538]: 2025-11-25 08:52:39.129 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060744.127684, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:52:39 np0005534516 nova_compute[253538]: 2025-11-25 08:52:39.129 253542 INFO nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:52:39 np0005534516 nova_compute[253538]: 2025-11-25 08:52:39.155 253542 DEBUG nova.compute.manager [None req-8908c9dc-1128-48af-bdf8-6ebab6cd4371 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:52:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:39Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:52:39 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:39Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 03:52:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 142 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 112 op/s
Nov 25 03:52:40 np0005534516 nova_compute[253538]: 2025-11-25 08:52:40.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:40 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 03:52:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.174 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.191 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.191 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.192 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.192 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.193 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:41 np0005534516 nova_compute[253538]: 2025-11-25 08:52:41.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 152 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:42.350 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:42.352 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:52:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.590 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.590 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:42 np0005534516 podman[372001]: 2025-11-25 08:52:42.866709136 +0000 UTC m=+0.111359074 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:52:42 np0005534516 nova_compute[253538]: 2025-11-25 08:52:42.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903804474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.138 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.340 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.341 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3647MB free_disk=59.95318603515625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.342 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.342 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9fed0304-736a-4739-9e78-a95c676d1206 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.461 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.481 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.481 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.495 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.518 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:52:43 np0005534516 nova_compute[253538]: 2025-11-25 08:52:43.573 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 161 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Nov 25 03:52:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115009720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:44 np0005534516 nova_compute[253538]: 2025-11-25 08:52:44.054 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:44 np0005534516 nova_compute[253538]: 2025-11-25 08:52:44.060 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:52:44 np0005534516 nova_compute[253538]: 2025-11-25 08:52:44.077 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:52:44 np0005534516 nova_compute[253538]: 2025-11-25 08:52:44.250 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:52:44 np0005534516 nova_compute[253538]: 2025-11-25 08:52:44.250 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:45 np0005534516 nova_compute[253538]: 2025-11-25 08:52:45.251 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:52:45 np0005534516 nova_compute[253538]: 2025-11-25 08:52:45.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 03:52:45 np0005534516 nova_compute[253538]: 2025-11-25 08:52:45.896 253542 INFO nova.compute.manager [None req-7b54f463-9fdc-48ff-975e-b0b6c2a0caff 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Get console output#033[00m
Nov 25 03:52:45 np0005534516 nova_compute[253538]: 2025-11-25 08:52:45.907 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.403 253542 DEBUG nova.compute.manager [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG nova.compute.manager [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.406 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.467 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.469 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.470 253542 INFO nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Terminating instance#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.471 253542 DEBUG nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:52:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:47 np0005534516 kernel: tap5518ee18-fc (unregistering): left promiscuous mode
Nov 25 03:52:47 np0005534516 NetworkManager[48915]: <info>  [1764060767.5283] device (tap5518ee18-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:52:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:47Z|01136|binding|INFO|Releasing lport 5518ee18-fcb4-4885-8bc6-a3daba84baff from this chassis (sb_readonly=0)
Nov 25 03:52:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:47Z|01137|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff down in Southbound
Nov 25 03:52:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:52:47Z|01138|binding|INFO|Removing iface tap5518ee18-fc ovn-installed in OVS
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.540 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.541 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 unbound from our chassis#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.542 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78cbfb83-5eb2-43b6-8132-ed291918f722, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.544 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8279534f-d0e6-41f9-9b01-ea9960a4fc83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.545 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace which is not needed anymore#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 03:52:47 np0005534516 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 14.131s CPU time.
Nov 25 03:52:47 np0005534516 systemd-machined[215790]: Machine qemu-141-instance-00000070 terminated.
Nov 25 03:52:47 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : haproxy version is 2.8.14-c23fe91
Nov 25 03:52:47 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : path to executable is /usr/sbin/haproxy
Nov 25 03:52:47 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [WARNING]  (371931) : Exiting Master process...
Nov 25 03:52:47 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [ALERT]    (371931) : Current worker (371933) exited with code 143 (Terminated)
Nov 25 03:52:47 np0005534516 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [WARNING]  (371931) : All workers exited. Exiting... (0)
Nov 25 03:52:47 np0005534516 systemd[1]: libpod-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope: Deactivated successfully.
Nov 25 03:52:47 np0005534516 conmon[371927]: conmon 8a36c5bc963ac3ed122e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope/container/memory.events
Nov 25 03:52:47 np0005534516 podman[372077]: 2025-11-25 08:52:47.692385571 +0000 UTC m=+0.052097905 container died 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.712 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.712 253542 DEBUG nova.objects.instance [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3-userdata-shm.mount: Deactivated successfully.
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.728 253542 DEBUG nova.virt.libvirt.vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:52:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:52:27Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.729 253542 DEBUG nova.network.os_vif_util [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.730 253542 DEBUG nova.network.os_vif_util [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.731 253542 DEBUG os_vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:52:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a7e55ccd711f80e0874846fa98c3c9a5f347c2a29b653b3ef7eacea6b4a1c5bc-merged.mount: Deactivated successfully.
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5518ee18-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.739 253542 INFO os_vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')#033[00m
Nov 25 03:52:47 np0005534516 podman[372077]: 2025-11-25 08:52:47.745243567 +0000 UTC m=+0.104955911 container cleanup 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:52:47 np0005534516 systemd[1]: libpod-conmon-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope: Deactivated successfully.
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 03:52:47 np0005534516 podman[372131]: 2025-11-25 08:52:47.822666211 +0000 UTC m=+0.044542954 container remove 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.830 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dcb056-51e6-4800-be94-9657e739ffe5]: (4, ('Tue Nov 25 08:52:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3)\n8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3\nTue Nov 25 08:52:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3)\n8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76777055-b6dd-491f-a149-6c24170c5166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.832 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 kernel: tap78cbfb83-50: left promiscuous mode
Nov 25 03:52:47 np0005534516 nova_compute[253538]: 2025-11-25 08:52:47.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25460703-45eb-4af1-95de-363286988d5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e83458da-9fea-4aca-8da4-e0d26474f559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.868 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff955e5-f5e9-4049-a4db-b71187d583cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.885 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[864e3d29-2318-4215-b84b-aa480201884c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610139, 'reachable_time': 16993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372149, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.888 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:52:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.888 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d31f5c-63bd-4f89-8add-2ecccdc79b2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:52:47 np0005534516 systemd[1]: run-netns-ovnmeta\x2d78cbfb83\x2d5eb2\x2d43b6\x2d8132\x2ded291918f722.mount: Deactivated successfully.
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.352 253542 INFO nova.virt.libvirt.driver [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting instance files /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del#033[00m
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.353 253542 INFO nova.virt.libvirt.driver [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deletion of /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del complete#033[00m
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.436 253542 INFO nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.437 253542 DEBUG oslo.service.loopingcall [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.437 253542 DEBUG nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:52:48 np0005534516 nova_compute[253538]: 2025-11-25 08:52:48.438 253542 DEBUG nova.network.neutron [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.286 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.286 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:52:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 105 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.843 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.844 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.861 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:52:49 np0005534516 nova_compute[253538]: 2025-11-25 08:52:49.982 253542 DEBUG nova.network.neutron [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.000 253542 INFO nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 1.56 seconds to deallocate network for instance.#033[00m
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.052 253542 DEBUG nova.compute.manager [req-c173ee66-ee08-4cf8-b67a-b963ff29fd61 req-4605baa8-cf88-4195-9439-8a679222c72a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-deleted-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.055 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.055 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.148 253542 DEBUG oslo_concurrency.processutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:52:50.355 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006487482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.630 253542 DEBUG oslo_concurrency.processutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.642 253542 DEBUG nova.compute.provider_tree [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.661 253542 DEBUG nova.scheduler.client.report [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.690 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.726 253542 INFO nova.scheduler.client.report [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 9fed0304-736a-4739-9e78-a95c676d1206
Nov 25 03:52:50 np0005534516 nova_compute[253538]: 2025-11-25 08:52:50.829 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.417 253542 DEBUG nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.417 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.418 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.418 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.419 253542 DEBUG nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:52:51 np0005534516 nova_compute[253538]: 2025-11-25 08:52:51.419 253542 WARNING nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state deleted and task_state None.
Nov 25 03:52:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Nov 25 03:52:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:52 np0005534516 nova_compute[253538]: 2025-11-25 08:52:52.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:52 np0005534516 nova_compute[253538]: 2025-11-25 08:52:52.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:52:53
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.439 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.439 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.454 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.540 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.540 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.547 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.548 253542 INFO nova.compute.claims [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:52:53 np0005534516 nova_compute[253538]: 2025-11-25 08:52:53.663 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 941 KiB/s wr, 55 op/s
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:52:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:52:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:52:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.213 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.221 253542 DEBUG nova.compute.provider_tree [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.235 253542 DEBUG nova.scheduler.client.report [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.259 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.260 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.308 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.309 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.346 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.370 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.462 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.464 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.464 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating image(s)
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.485 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.507 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.529 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.533 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.633 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.635 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.636 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.637 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.670 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.675 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 76611b0b-db06-4903-a22a-59b23a1e0d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:52:54 np0005534516 nova_compute[253538]: 2025-11-25 08:52:54.723 253542 DEBUG nova.policy [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.224 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 76611b0b-db06-4903-a22a-59b23a1e0d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.296 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.389 253542 DEBUG nova.objects.instance [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.430 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.431 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Ensure instance console log exists: /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.432 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.433 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.433 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:52:55 np0005534516 nova_compute[253538]: 2025-11-25 08:52:55.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 100 KiB/s wr, 45 op/s
Nov 25 03:52:56 np0005534516 nova_compute[253538]: 2025-11-25 08:52:56.329 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully created port: 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.399 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully updated port: 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.420 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.420 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.421 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:52:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG nova.compute.manager [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG nova.compute.manager [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:52:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 101 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Nov 25 03:52:57 np0005534516 nova_compute[253538]: 2025-11-25 08:52:57.996 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.886 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance network_info: |[{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.903 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.905 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start _get_guest_xml network_info=[{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.909 253542 WARNING nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.917 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.917 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.921 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:52:58 np0005534516 nova_compute[253538]: 2025-11-25 08:52:58.926 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:52:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028327335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.403 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.428 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.433 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:52:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 03:52:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:52:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993957718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.881 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.884 253542 DEBUG nova.virt.libvirt.vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.885 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.886 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.888 253542 DEBUG nova.objects.instance [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.907 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <name>instance-00000072</name>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:52:58</nova:creationTime>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="serial">76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="uuid">76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:36:b2:21"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <target dev="tap8f1fcc3c-5f"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log" append="off"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:52:59 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:52:59 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:52:59 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:52:59 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.909 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Preparing to wait for external event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.912 253542 DEBUG nova.virt.libvirt.vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.912 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.913 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.914 253542 DEBUG os_vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.916 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.916 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f1fcc3c-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f1fcc3c-5f, col_values=(('external_ids', {'iface-id': '8f1fcc3c-5f46-4272-be9b-4d5213b3aceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:b2:21', 'vm-uuid': '76611b0b-db06-4903-a22a-59b23a1e0d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:59 np0005534516 NetworkManager[48915]: <info>  [1764060779.9241] manager: (tap8f1fcc3c-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.931 253542 INFO os_vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f')#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.985 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:36:b2:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:52:59 np0005534516 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Using config drive#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.010 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.391 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating config drive at /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.396 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2ubngmk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.440 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.441 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.454 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.547 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2ubngmk" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.573 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.576 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.731 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.732 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deleting local config drive /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config because it was imported into RBD.#033[00m
Nov 25 03:53:00 np0005534516 kernel: tap8f1fcc3c-5f: entered promiscuous mode
Nov 25 03:53:00 np0005534516 NetworkManager[48915]: <info>  [1764060780.7843] manager: (tap8f1fcc3c-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:00Z|01139|binding|INFO|Claiming lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for this chassis.
Nov 25 03:53:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:00Z|01140|binding|INFO|8f1fcc3c-5f46-4272-be9b-4d5213b3aceb: Claiming fa:16:3e:36:b2:21 10.100.0.3
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.801 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:b2:21 10.100.0.3'], port_security=['fa:16:3e:36:b2:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70c2e597-b59d-412f-a7ad-333ba7cbd35e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b027220-e81c-4ac9-90ba-6c25793cc1d8, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.803 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb in datapath 60f2641c-f03e-4ef3-a462-4bd54e93c59c bound to our chassis#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.803 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60f2641c-f03e-4ef3-a462-4bd54e93c59c#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3daf525-1d85-4c4b-9356-ee50584a9740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.817 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60f2641c-f1 in ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.818 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60f2641c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27a40688-c661-43cc-86b1-788c1092903d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 systemd-machined[215790]: New machine qemu-142-instance-00000072.
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.820 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2efc22-664b-4a72-af4a-78fb09745acb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.832 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8e55b737-df94-43d5-b8a8-5f382cc20421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 systemd[1]: Started Virtual Machine qemu-142-instance-00000072.
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.856 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8519504-a6b4-4a2f-be3e-afdb9ff76ed8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:00 np0005534516 systemd-udevd[372500]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:00Z|01141|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb ovn-installed in OVS
Nov 25 03:53:00 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:00Z|01142|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb up in Southbound
Nov 25 03:53:00 np0005534516 nova_compute[253538]: 2025-11-25 08:53:00.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:00 np0005534516 NetworkManager[48915]: <info>  [1764060780.8797] device (tap8f1fcc3c-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:53:00 np0005534516 NetworkManager[48915]: <info>  [1764060780.8816] device (tap8f1fcc3c-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.892 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf2bac1-2ad3-46a0-9c15-551ed096ac00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[035e419c-6708-446a-9330-c956a4bda8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 NetworkManager[48915]: <info>  [1764060780.8994] manager: (tap60f2641c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/468)
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.939 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9ba79c-1321-4774-811c-f89aff29046d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.943 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[df40f483-7bdd-4d9d-a2d4-292771f24054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:00 np0005534516 NetworkManager[48915]: <info>  [1764060780.9723] device (tap60f2641c-f0): carrier: link connected
Nov 25 03:53:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.981 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[453cb564-4891-4e56-aaaf-02f83a332a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.002 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3197cac-04bf-45c4-ae8a-6d51b171fcee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f2641c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:65:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613557, 'reachable_time': 35003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372530, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59877384-4653-4bb7-8170-e5aa1898bf4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:650e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613557, 'tstamp': 613557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372531, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82e5a50a-f998-4183-b03d-d688ad6b3e24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f2641c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:65:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613557, 'reachable_time': 35003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372532, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b779864-1e64-42c3-b10a-01815a852090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86af22c9-475b-45ce-8dca-da3dc15cc223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f2641c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60f2641c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:01 np0005534516 NetworkManager[48915]: <info>  [1764060781.1478] manager: (tap60f2641c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Nov 25 03:53:01 np0005534516 kernel: tap60f2641c-f0: entered promiscuous mode
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.149 253542 DEBUG nova.compute.manager [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.150 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60f2641c-f0, col_values=(('external_ids', {'iface-id': 'baf4584e-8381-4bcf-9f75-0a7b69cd8212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.151 253542 DEBUG nova.compute.manager [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Processing event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:01 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:01Z|01143|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.181 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef428e6-cc2d-4676-ae2d-39507fa14c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.183 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-60f2641c-f03e-4ef3-a462-4bd54e93c59c
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 60f2641c-f03e-4ef3-a462-4bd54e93c59c
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:53:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.185 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'env', 'PROCESS_TAG=haproxy-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60f2641c-f03e-4ef3-a462-4bd54e93c59c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:53:01 np0005534516 podman[372564]: 2025-11-25 08:53:01.597859372 +0000 UTC m=+0.052972169 container create f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:53:01 np0005534516 systemd[1]: Started libpod-conmon-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope.
Nov 25 03:53:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:01 np0005534516 podman[372564]: 2025-11-25 08:53:01.57013071 +0000 UTC m=+0.025243497 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:53:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3552cae6011fac03a9da2375e711d654941020090cecb0f909b8a0ba241d8ae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:01 np0005534516 podman[372564]: 2025-11-25 08:53:01.68066641 +0000 UTC m=+0.135779207 container init f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:53:01 np0005534516 podman[372564]: 2025-11-25 08:53:01.69152157 +0000 UTC m=+0.146634337 container start f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:53:01 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : New worker (372623) forked
Nov 25 03:53:01 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : Loading success.
Nov 25 03:53:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.826 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.827 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.825934, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.827 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Started (Lifecycle Event)#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.830 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.834 253542 INFO nova.virt.libvirt.driver [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance spawned successfully.#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.834 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.871 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.877 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.877 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.879 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.884 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.914 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.914 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.826323, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.915 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.930 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.935 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.829803, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.936 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.962 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.966 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:01 np0005534516 nova_compute[253538]: 2025-11-25 08:53:01.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.155 253542 INFO nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 7.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.156 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.231 253542 INFO nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 8.73 seconds to build instance.#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.251 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.710 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060767.7089639, 9fed0304-736a-4739-9e78-a95c676d1206 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.711 253542 INFO nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.728 253542 DEBUG nova.compute.manager [None req-9355f7f9-45ed-434f-a7b4-c3fdfd5fd711 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:02 np0005534516 nova_compute[253538]: 2025-11-25 08:53:02.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.239 253542 DEBUG nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.239 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.240 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.241 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.241 253542 DEBUG nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:03 np0005534516 nova_compute[253538]: 2025-11-25 08:53:03.242 253542 WARNING nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with vm_state active and task_state None.#033[00m
Nov 25 03:53:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10383726680015627 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:53:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:53:04 np0005534516 nova_compute[253538]: 2025-11-25 08:53:04.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:05 np0005534516 NetworkManager[48915]: <info>  [1764060785.5684] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Nov 25 03:53:05 np0005534516 NetworkManager[48915]: <info>  [1764060785.5695] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Nov 25 03:53:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:05Z|01144|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.664 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:05Z|01145|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 03:53:05 np0005534516 podman[372638]: 2025-11-25 08:53:05.878294809 +0000 UTC m=+0.117746124 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG nova.compute.manager [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG nova.compute.manager [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:53:05 np0005534516 nova_compute[253538]: 2025-11-25 08:53:05.878 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:53:07 np0005534516 nova_compute[253538]: 2025-11-25 08:53:07.232 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:53:07 np0005534516 nova_compute[253538]: 2025-11-25 08:53:07.233 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:07 np0005534516 nova_compute[253538]: 2025-11-25 08:53:07.402 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:07 np0005534516 nova_compute[253538]: 2025-11-25 08:53:07.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:07 np0005534516 nova_compute[253538]: 2025-11-25 08:53:07.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 03:53:07 np0005534516 podman[372657]: 2025-11-25 08:53:07.8139716 +0000 UTC m=+0.071888147 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:53:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 87 op/s
Nov 25 03:53:09 np0005534516 nova_compute[253538]: 2025-11-25 08:53:09.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f1572cef-6b91-4b1a-bd6c-96cb4eff0c55 does not exist
Nov 25 03:53:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8399d3c8-dd57-4d30-92fe-03501066f83a does not exist
Nov 25 03:53:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ade82a42-a067-4256-9a53-eb2846d3cb9a does not exist
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:53:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:53:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.044 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.045 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.058095764 +0000 UTC m=+0.050523704 container create b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.059 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:53:12 np0005534516 systemd[1]: Started libpod-conmon-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope.
Nov 25 03:53:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.033977098 +0000 UTC m=+0.026405018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.146482062 +0000 UTC m=+0.138909972 container init b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.157808565 +0000 UTC m=+0.150236465 container start b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:53:12 np0005534516 sweet_goldwasser[372964]: 167 167
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.165079779 +0000 UTC m=+0.157507719 container attach b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:12 np0005534516 systemd[1]: libpod-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope: Deactivated successfully.
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.184425257 +0000 UTC m=+0.176853237 container died b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.188 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.190 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.198 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.199 253542 INFO nova.compute.claims [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:53:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-397771b7242a7a653ee77c1099af808974bc553922fd215896b96a8843380842-merged.mount: Deactivated successfully.
Nov 25 03:53:12 np0005534516 podman[372947]: 2025-11-25 08:53:12.2341769 +0000 UTC m=+0.226604800 container remove b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 03:53:12 np0005534516 systemd[1]: libpod-conmon-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope: Deactivated successfully.
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.330 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:12 np0005534516 podman[372991]: 2025-11-25 08:53:12.427294332 +0000 UTC m=+0.051585953 container create f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 03:53:12 np0005534516 systemd[1]: Started libpod-conmon-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope.
Nov 25 03:53:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:12 np0005534516 podman[372991]: 2025-11-25 08:53:12.403615867 +0000 UTC m=+0.027907488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:12 np0005534516 podman[372991]: 2025-11-25 08:53:12.516904602 +0000 UTC m=+0.141196203 container init f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:53:12 np0005534516 podman[372991]: 2025-11-25 08:53:12.525893162 +0000 UTC m=+0.150184773 container start f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:53:12 np0005534516 podman[372991]: 2025-11-25 08:53:12.529126889 +0000 UTC m=+0.153418480 container attach f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:53:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010193917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.816 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.822 253542 DEBUG nova.compute.provider_tree [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.844 253542 DEBUG nova.scheduler.client.report [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.872 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.873 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.932 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.933 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.959 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:53:12 np0005534516 nova_compute[253538]: 2025-11-25 08:53:12.979 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.087 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.089 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.090 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating image(s)
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.128 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.169 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.201 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.204 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.243 253542 DEBUG nova.policy [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.285 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.286 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.286 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.309 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.314 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2da7049d-715e-4209-8c17-dda96ff6a192_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.645 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2da7049d-715e-4209-8c17-dda96ff6a192_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:53:13 np0005534516 adoring_swanson[373017]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:53:13 np0005534516 adoring_swanson[373017]: --> relative data size: 1.0
Nov 25 03:53:13 np0005534516 adoring_swanson[373017]: --> All data devices are unavailable
Nov 25 03:53:13 np0005534516 systemd[1]: libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Deactivated successfully.
Nov 25 03:53:13 np0005534516 systemd[1]: libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Consumed 1.035s CPU time.
Nov 25 03:53:13 np0005534516 conmon[373017]: conmon f58fd611877415f9a4d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope/container/memory.events
Nov 25 03:53:13 np0005534516 podman[372991]: 2025-11-25 08:53:13.703048568 +0000 UTC m=+1.327340189 container died f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.734 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:53:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4-merged.mount: Deactivated successfully.
Nov 25 03:53:13 np0005534516 podman[372991]: 2025-11-25 08:53:13.775461288 +0000 UTC m=+1.399752879 container remove f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:53:13 np0005534516 systemd[1]: libpod-conmon-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Deactivated successfully.
Nov 25 03:53:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 150 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 03:53:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:13Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:b2:21 10.100.0.3
Nov 25 03:53:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:13Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:b2:21 10.100.0.3
Nov 25 03:53:13 np0005534516 podman[373185]: 2025-11-25 08:53:13.839833062 +0000 UTC m=+0.110128120 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.860 253542 DEBUG nova.objects.instance [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.872 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.873 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Ensure instance console log exists: /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.874 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.874 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:53:13 np0005534516 nova_compute[253538]: 2025-11-25 08:53:13.875 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.476604335 +0000 UTC m=+0.058924579 container create 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 03:53:14 np0005534516 systemd[1]: Started libpod-conmon-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope.
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.452673685 +0000 UTC m=+0.034993969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:14 np0005534516 nova_compute[253538]: 2025-11-25 08:53:14.558 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Successfully created port: e0eb0246-9869-4c10-b45b-bd0799ae0c95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.56414124 +0000 UTC m=+0.146461564 container init 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.572630898 +0000 UTC m=+0.154951142 container start 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.575561676 +0000 UTC m=+0.157881950 container attach 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:53:14 np0005534516 thirsty_euclid[373419]: 167 167
Nov 25 03:53:14 np0005534516 systemd[1]: libpod-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope: Deactivated successfully.
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.618077695 +0000 UTC m=+0.200397969 container died 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:53:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b223c47ae5b7cbbca42189143355e77ef2e0e9e7b00544d74752f6719725a2c5-merged.mount: Deactivated successfully.
Nov 25 03:53:14 np0005534516 podman[373402]: 2025-11-25 08:53:14.679275724 +0000 UTC m=+0.261595958 container remove 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:53:14 np0005534516 systemd[1]: libpod-conmon-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope: Deactivated successfully.
Nov 25 03:53:14 np0005534516 nova_compute[253538]: 2025-11-25 08:53:14.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:14 np0005534516 podman[373443]: 2025-11-25 08:53:14.837929332 +0000 UTC m=+0.023688885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:14 np0005534516 podman[373443]: 2025-11-25 08:53:14.948245177 +0000 UTC m=+0.134004710 container create 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:53:15 np0005534516 systemd[1]: Started libpod-conmon-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope.
Nov 25 03:53:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:15 np0005534516 podman[373443]: 2025-11-25 08:53:15.086230332 +0000 UTC m=+0.271989865 container init 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:15 np0005534516 podman[373443]: 2025-11-25 08:53:15.093553339 +0000 UTC m=+0.279312882 container start 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:53:15 np0005534516 podman[373443]: 2025-11-25 08:53:15.097515545 +0000 UTC m=+0.283275118 container attach 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.536 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Successfully updated port: e0eb0246-9869-4c10-b45b-bd0799ae0c95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.553 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.554 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.555 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.626 253542 DEBUG nova.compute.manager [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.626 253542 DEBUG nova.compute.manager [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.627 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:53:15 np0005534516 nova_compute[253538]: 2025-11-25 08:53:15.743 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:53:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 183 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]: {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    "0": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "devices": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "/dev/loop3"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            ],
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_name": "ceph_lv0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_size": "21470642176",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "name": "ceph_lv0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "tags": {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_name": "ceph",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.crush_device_class": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.encrypted": "0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_id": "0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.vdo": "0"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            },
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "vg_name": "ceph_vg0"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        }
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    ],
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    "1": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "devices": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "/dev/loop4"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            ],
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_name": "ceph_lv1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_size": "21470642176",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "name": "ceph_lv1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "tags": {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_name": "ceph",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.crush_device_class": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.encrypted": "0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_id": "1",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.vdo": "0"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            },
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "vg_name": "ceph_vg1"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        }
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    ],
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    "2": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "devices": [
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "/dev/loop5"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            ],
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_name": "ceph_lv2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_size": "21470642176",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "name": "ceph_lv2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "tags": {
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.cluster_name": "ceph",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.crush_device_class": "",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.encrypted": "0",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osd_id": "2",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:                "ceph.vdo": "0"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            },
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "type": "block",
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:            "vg_name": "ceph_vg2"
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:        }
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]:    ]
Nov 25 03:53:15 np0005534516 charming_mahavira[373460]: }
Nov 25 03:53:15 np0005534516 systemd[1]: libpod-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope: Deactivated successfully.
Nov 25 03:53:15 np0005534516 podman[373469]: 2025-11-25 08:53:15.899048971 +0000 UTC m=+0.029082370 container died 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:53:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e-merged.mount: Deactivated successfully.
Nov 25 03:53:15 np0005534516 podman[373469]: 2025-11-25 08:53:15.957385233 +0000 UTC m=+0.087418612 container remove 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:15 np0005534516 systemd[1]: libpod-conmon-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope: Deactivated successfully.
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.681064724 +0000 UTC m=+0.047871893 container create 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:53:16 np0005534516 systemd[1]: Started libpod-conmon-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope.
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.653675691 +0000 UTC m=+0.020482850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.781983447 +0000 UTC m=+0.148790596 container init 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.796168037 +0000 UTC m=+0.162975206 container start 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.80037433 +0000 UTC m=+0.167181469 container attach 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:53:16 np0005534516 flamboyant_wilson[373641]: 167 167
Nov 25 03:53:16 np0005534516 systemd[1]: libpod-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope: Deactivated successfully.
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.80298697 +0000 UTC m=+0.169794139 container died 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:53:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-00173d2313ee2c64b0b61b8ff3441b38a9841d6f5323735886a6c34c001b9078-merged.mount: Deactivated successfully.
Nov 25 03:53:16 np0005534516 podman[373625]: 2025-11-25 08:53:16.848001545 +0000 UTC m=+0.214808674 container remove 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:53:16 np0005534516 systemd[1]: libpod-conmon-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope: Deactivated successfully.
Nov 25 03:53:17 np0005534516 podman[373665]: 2025-11-25 08:53:17.085374073 +0000 UTC m=+0.096795404 container create 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:53:17 np0005534516 systemd[1]: Started libpod-conmon-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope.
Nov 25 03:53:17 np0005534516 podman[373665]: 2025-11-25 08:53:17.057919377 +0000 UTC m=+0.069340798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:53:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:17 np0005534516 podman[373665]: 2025-11-25 08:53:17.195568174 +0000 UTC m=+0.206989535 container init 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:17 np0005534516 podman[373665]: 2025-11-25 08:53:17.20551161 +0000 UTC m=+0.216932941 container start 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:53:17 np0005534516 podman[373665]: 2025-11-25 08:53:17.209444525 +0000 UTC m=+0.220865856 container attach 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.365 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.401 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance network_info: |[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.405 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start _get_guest_xml network_info=[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.412 253542 WARNING nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.419 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.419 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.423 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.423 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.429 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 207 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 109 op/s
Nov 25 03:53:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043780554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.928 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.955 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:17 np0005534516 nova_compute[253538]: 2025-11-25 08:53:17.960 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]: {
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_id": 1,
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "type": "bluestore"
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    },
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_id": 2,
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "type": "bluestore"
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    },
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_id": 0,
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:        "type": "bluestore"
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]:    }
Nov 25 03:53:18 np0005534516 compassionate_driscoll[373681]: }
Nov 25 03:53:18 np0005534516 systemd[1]: libpod-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope: Deactivated successfully.
Nov 25 03:53:18 np0005534516 podman[373665]: 2025-11-25 08:53:18.196735836 +0000 UTC m=+1.208157187 container died 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:53:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623-merged.mount: Deactivated successfully.
Nov 25 03:53:18 np0005534516 podman[373665]: 2025-11-25 08:53:18.258772618 +0000 UTC m=+1.270193949 container remove 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:53:18 np0005534516 systemd[1]: libpod-conmon-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope: Deactivated successfully.
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f5967b3e-72fd-44fc-9490-6791292b9d83 does not exist
Nov 25 03:53:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5c7d998b-8ad5-4c02-b93d-4376418d0428 does not exist
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005190395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.456 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.458 253542 DEBUG nova.virt.libvirt.vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:13Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.458 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.459 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.461 253542 DEBUG nova.objects.instance [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.476 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <uuid>2da7049d-715e-4209-8c17-dda96ff6a192</uuid>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <name>instance-00000073</name>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1007468219</nova:name>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:53:17</nova:creationTime>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <nova:port uuid="e0eb0246-9869-4c10-b45b-bd0799ae0c95">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="serial">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="uuid">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk.config">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ea:4d:89"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <target dev="tape0eb0246-98"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log" append="off"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:53:18 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:53:18 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:53:18 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:53:18 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Preparing to wait for external event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.478 253542 DEBUG nova.virt.libvirt.vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:13Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.478 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.479 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.479 253542 DEBUG os_vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.481 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0eb0246-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0eb0246-98, col_values=(('external_ids', {'iface-id': 'e0eb0246-9869-4c10-b45b-bd0799ae0c95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:4d:89', 'vm-uuid': '2da7049d-715e-4209-8c17-dda96ff6a192'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:18 np0005534516 NetworkManager[48915]: <info>  [1764060798.4885] manager: (tape0eb0246-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.496 253542 INFO os_vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.548 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.549 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.550 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:ea:4d:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.550 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Using config drive#033[00m
Nov 25 03:53:18 np0005534516 nova_compute[253538]: 2025-11-25 08:53:18.576 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.161 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.161 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.171 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating config drive at /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config#033[00m
Nov 25 03:53:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.181 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lngydeb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.229 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.335 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lngydeb" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.361 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.365 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.542 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.543 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deleting local config drive /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config because it was imported into RBD.#033[00m
Nov 25 03:53:19 np0005534516 kernel: tape0eb0246-98: entered promiscuous mode
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.6051] manager: (tape0eb0246-98): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Nov 25 03:53:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:19Z|01146|binding|INFO|Claiming lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 for this chassis.
Nov 25 03:53:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:19Z|01147|binding|INFO|e0eb0246-9869-4c10-b45b-bd0799ae0c95: Claiming fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.614 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.617 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb bound to our chassis#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.619 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab581a21-5712-4b8e-87f9-b943349fbfcb#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5d7b55-1031-4331-9f51-1d793fddc228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.634 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab581a21-51 in ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.637 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab581a21-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5453d83-cd19-4bf8-8ec5-ac8cc2be2dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc127147-c344-4d3d-9983-4c54d93b4865]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:19Z|01148|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 ovn-installed in OVS
Nov 25 03:53:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:19Z|01149|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 up in Southbound
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 systemd-machined[215790]: New machine qemu-143-instance-00000073.
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.656 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[3820ce2f-889d-4a87-8416-0ff6d9f7d80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 systemd[1]: Started Virtual Machine qemu-143-instance-00000073.
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6362c18-f75a-4ba1-9f88-1ffb73c2f857]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 systemd-udevd[373915]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.7009] device (tape0eb0246-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.7017] device (tape0eb0246-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.721 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b43caa17-8af5-48d1-abe0-2bb464f60d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.726 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f68907c-0090-45df-a926-6754a8de25b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.7277] manager: (tapab581a21-50): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Nov 25 03:53:19 np0005534516 systemd-udevd[373919]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.764 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d2994d5a-02dd-4f35-8d2b-97fbaaa4ec62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.767 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[152de971-7bbc-4691-85a2-9ca61bf1f0d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.7911] device (tapab581a21-50): carrier: link connected
Nov 25 03:53:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7874146d-e78b-4d4c-9e08-de3c69592211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9a1867-ae34-496c-b482-7af215322829]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615439, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373945, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[518ada18-4876-4360-82c3-f9a99916c9bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:797a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615439, 'tstamp': 615439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373946, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.859 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b06cdee9-6a8d-4a67-b937-4931cfa049d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615439, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373947, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.892 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[088d2636-c3ec-4794-bc2a-3c9788b80d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23b245cd-cfec-471d-97cd-6b377ad481d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.973 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.973 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.974 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab581a21-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:19 np0005534516 kernel: tapab581a21-50: entered promiscuous mode
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 NetworkManager[48915]: <info>  [1764060799.9799] manager: (tapab581a21-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab581a21-50, col_values=(('external_ids', {'iface-id': 'b956a451-af5c-4f4e-b3b8-704d71686765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:19Z|01150|binding|INFO|Releasing lport b956a451-af5c-4f4e-b3b8-704d71686765 from this chassis (sb_readonly=0)
Nov 25 03:53:19 np0005534516 nova_compute[253538]: 2025-11-25 08:53:19.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.991 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.992 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0206fb18-65bb-4036-ab34-c4d70bf2ed92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.993 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:53:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.994 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'env', 'PROCESS_TAG=haproxy-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab581a21-5712-4b8e-87f9-b943349fbfcb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.201 253542 DEBUG nova.compute.manager [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.201 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.202 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.203 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.203 253542 DEBUG nova.compute.manager [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Processing event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.244 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2439125, 2da7049d-715e-4209-8c17-dda96ff6a192 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.245 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Started (Lifecycle Event)#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.248 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.257 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.261 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance spawned successfully.#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.262 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.265 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.271 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.288 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.289 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.290 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.290 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.291 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.291 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.318 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2441359, 2da7049d-715e-4209-8c17-dda96ff6a192 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.319 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Paused (Lifecycle Event)
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.353 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.357 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2565794, 2da7049d-715e-4209-8c17-dda96ff6a192 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Resumed (Lifecycle Event)
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.381 253542 INFO nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 7.29 seconds to spawn the instance on the hypervisor.
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.381 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.384 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.413 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:53:20 np0005534516 podman[374021]: 2025-11-25 08:53:20.417995693 +0000 UTC m=+0.048586942 container create 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.444 253542 INFO nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 8.34 seconds to build instance.
Nov 25 03:53:20 np0005534516 systemd[1]: Started libpod-conmon-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope.
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.472 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53e3214adfe5885526c3929acf65ff9e3b10430b7f7fcf526d2c5c055a47429/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:20 np0005534516 podman[374021]: 2025-11-25 08:53:20.397063033 +0000 UTC m=+0.027654302 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:53:20 np0005534516 podman[374021]: 2025-11-25 08:53:20.502129416 +0000 UTC m=+0.132720715 container init 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:53:20 np0005534516 podman[374021]: 2025-11-25 08:53:20.511074415 +0000 UTC m=+0.141665664 container start 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:53:20 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : New worker (374042) forked
Nov 25 03:53:20 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : Loading success.
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.601 253542 INFO nova.compute.manager [None req-68c25e1f-d24f-479c-af3b-2edbd6a3be38 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Get console output
Nov 25 03:53:20 np0005534516 nova_compute[253538]: 2025-11-25 08:53:20.608 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 03:53:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.265 253542 DEBUG nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.266 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.266 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 DEBUG nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 WARNING nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state None.
Nov 25 03:53:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:22 np0005534516 nova_compute[253538]: 2025-11-25 08:53:22.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:23 np0005534516 nova_compute[253538]: 2025-11-25 08:53:23.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.387 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.388 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.389 253542 DEBUG nova.objects.instance [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.806 253542 DEBUG nova.objects.instance [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_requests' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.821 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.856 253542 DEBUG nova.compute.manager [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.856 253542 DEBUG nova.compute.manager [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.857 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.857 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:53:25 np0005534516 nova_compute[253538]: 2025-11-25 08:53:25.858 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:53:26 np0005534516 nova_compute[253538]: 2025-11-25 08:53:26.004 253542 DEBUG nova.policy [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:53:26 np0005534516 nova_compute[253538]: 2025-11-25 08:53:26.805 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully created port: 1959aca7-b25c-4fe5-b59a-70db352af78b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.289 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.289 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.306 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.423 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully updated port: 1959aca7-b25c-4fe5-b59a-70db352af78b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.435 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.436 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.436 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:53:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.528 253542 DEBUG nova.compute.manager [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.529 253542 DEBUG nova.compute.manager [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.529 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:27 np0005534516 nova_compute[253538]: 2025-11-25 08:53:27.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Nov 25 03:53:28 np0005534516 nova_compute[253538]: 2025-11-25 08:53:28.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:28 np0005534516 nova_compute[253538]: 2025-11-25 08:53:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:53:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:53:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:53:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:53:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:53:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 245 KiB/s wr, 91 op/s
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.311 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.341 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.342 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.343 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.346 253542 DEBUG nova.virt.libvirt.vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.346 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.347 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.348 253542 DEBUG os_vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.349 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.349 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.353 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1959aca7-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.354 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1959aca7-b2, col_values=(('external_ids', {'iface-id': '1959aca7-b25c-4fe5-b59a-70db352af78b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:c5:52', 'vm-uuid': '76611b0b-db06-4903-a22a-59b23a1e0d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.3564] manager: (tap1959aca7-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.370 253542 INFO os_vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2')#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.371 253542 DEBUG nova.virt.libvirt.vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.371 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.372 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.375 253542 DEBUG nova.virt.libvirt.guest [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] attach device xml: <interface type="ethernet">
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <target dev="tap1959aca7-b2"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:53:30 np0005534516 nova_compute[253538]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.4029] manager: (tap1959aca7-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Nov 25 03:53:30 np0005534516 kernel: tap1959aca7-b2: entered promiscuous mode
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:30Z|01151|binding|INFO|Claiming lport 1959aca7-b25c-4fe5-b59a-70db352af78b for this chassis.
Nov 25 03:53:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:30Z|01152|binding|INFO|1959aca7-b25c-4fe5-b59a-70db352af78b: Claiming fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.421 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c5:52 10.100.0.26'], port_security=['fa:16:3e:d6:c5:52 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1959aca7-b25c-4fe5-b59a-70db352af78b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.423 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1959aca7-b25c-4fe5-b59a-70db352af78b in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 bound to our chassis#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.425 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37#033[00m
Nov 25 03:53:30 np0005534516 systemd-udevd[374058]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a167b9b6-16ab-4a36-9fab-d726a2389334]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.443 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f0f7d83-b1 in ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.446 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f0f7d83-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e243095-614d-44b1-b6e7-b9ad1352ff95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed9c0ae-2b7c-46ed-a81b-1aff17481e6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.466 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc2495-8151-4b1e-b4fa-868c9a3cbec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:30Z|01153|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b ovn-installed in OVS
Nov 25 03:53:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:30Z|01154|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b up in Southbound
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.4736] device (tap1959aca7-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.4750] device (tap1959aca7-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.503 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41e0dcfd-e9c9-463f-bcc6-eb78394677d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:36:b2:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d6:c5:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.543 253542 DEBUG nova.virt.libvirt.guest [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 03:53:30 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 03:53:30 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:53:30 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:53:30 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:53:30 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.547 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c62f272a-1424-4aaf-bcc1-446b6ba1b462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.5558] manager: (tap8f0f7d83-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/478)
Nov 25 03:53:30 np0005534516 systemd-udevd[374061]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91ef1049-1106-4798-8782-8a9f3f0470fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.582 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.593 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[62bb72dc-4361-477e-94b1-705a0c60e9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.596 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27edfd-ee18-4075-b3bc-90b97dc8dc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.6194] device (tap8f0f7d83-b0): carrier: link connected
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.623 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7f51ac-73d2-4600-8b01-ad73b9b22d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e4233c-75b8-446f-97bf-3cdcaed31aa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374084, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.656 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30600fff-e3ce-456c-bf65-384816a736dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:fd91'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616521, 'tstamp': 616521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374085, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51737ee8-a821-44f0-9bc7-75c5716cafd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374086, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad30bcb7-991c-4d67-b2b6-5e1dc7ed52dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c91c59d5-c4fe-43f7-8b63-99ba576efddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.811 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 kernel: tap8f0f7d83-b0: entered promiscuous mode
Nov 25 03:53:30 np0005534516 NetworkManager[48915]: <info>  [1764060810.8154] manager: (tap8f0f7d83-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:30Z|01155|binding|INFO|Releasing lport 4bc48b70-3942-46d1-ac71-5fa19e5d9ae3 from this chassis (sb_readonly=0)
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.824 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e9462-829e-4cbb-b996-eee8a68d0f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.838 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:53:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.839 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'env', 'PROCESS_TAG=haproxy-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:53:30 np0005534516 nova_compute[253538]: 2025-11-25 08:53:30.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.160 253542 DEBUG nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:31 np0005534516 nova_compute[253538]: 2025-11-25 08:53:31.162 253542 WARNING nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.#033[00m
Nov 25 03:53:31 np0005534516 podman[374118]: 2025-11-25 08:53:31.298351517 +0000 UTC m=+0.069358770 container create a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:53:31 np0005534516 systemd[1]: Started libpod-conmon-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope.
Nov 25 03:53:31 np0005534516 podman[374118]: 2025-11-25 08:53:31.266723651 +0000 UTC m=+0.037730924 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:53:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ca46848af656bb6a002ab5710a7e5831c6c96bc3a4e29b10c26e75820c7aca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:31 np0005534516 podman[374118]: 2025-11-25 08:53:31.396905984 +0000 UTC m=+0.167913237 container init a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:53:31 np0005534516 podman[374118]: 2025-11-25 08:53:31.409732076 +0000 UTC m=+0.180739389 container start a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:53:31 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : New worker (374139) forked
Nov 25 03:53:31 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : Loading success.
Nov 25 03:53:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 215 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 80 op/s
Nov 25 03:53:32 np0005534516 nova_compute[253538]: 2025-11-25 08:53:32.056 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:53:32 np0005534516 nova_compute[253538]: 2025-11-25 08:53:32.056 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:32 np0005534516 nova_compute[253538]: 2025-11-25 08:53:32.075 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:32Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 03:53:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:32Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 03:53:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:32 np0005534516 nova_compute[253538]: 2025-11-25 08:53:32.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.250 253542 DEBUG nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.252 253542 DEBUG nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:33 np0005534516 nova_compute[253538]: 2025-11-25 08:53:33.252 253542 WARNING nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.#033[00m
Nov 25 03:53:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:33Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 03:53:33 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:33Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 03:53:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 227 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 808 KiB/s wr, 51 op/s
Nov 25 03:53:34 np0005534516 nova_compute[253538]: 2025-11-25 08:53:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:35 np0005534516 nova_compute[253538]: 2025-11-25 08:53:35.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 232 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 849 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:53:36 np0005534516 podman[374149]: 2025-11-25 08:53:36.811246661 +0000 UTC m=+0.063343575 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.969 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:53:36 np0005534516 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:37 np0005534516 nova_compute[253538]: 2025-11-25 08:53:37.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 244 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 03:53:38 np0005534516 podman[374168]: 2025-11-25 08:53:38.812130202 +0000 UTC m=+0.062363188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.385 253542 INFO nova.compute.manager [None req-cc4d202c-01ab-4e8b-a455-6903f57f3cc0 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.390 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.637 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.638 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.639 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.644 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.645 253542 DEBUG nova.objects.instance [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:39 np0005534516 nova_compute[253538]: 2025-11-25 08:53:39.662 253542 DEBUG nova.virt.libvirt.driver [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 03:53:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 03:53:40 np0005534516 nova_compute[253538]: 2025-11-25 08:53:40.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.078 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.819 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.856 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.856 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.857 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.857 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.858 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:53:41 np0005534516 kernel: tape0eb0246-98 (unregistering): left promiscuous mode
Nov 25 03:53:41 np0005534516 NetworkManager[48915]: <info>  [1764060821.9766] device (tape0eb0246-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:53:41 np0005534516 nova_compute[253538]: 2025-11-25 08:53:41.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:41Z|01156|binding|INFO|Releasing lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 from this chassis (sb_readonly=0)
Nov 25 03:53:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:41Z|01157|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 down in Southbound
Nov 25 03:53:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:41Z|01158|binding|INFO|Removing iface tape0eb0246-98 ovn-installed in OVS
Nov 25 03:53:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.998 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb unbound from our chassis#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.003 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab581a21-5712-4b8e-87f9-b943349fbfcb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a966bcb8-573b-4873-b3ca-3dab46b29fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.005 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace which is not needed anymore#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:42 np0005534516 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 03:53:42 np0005534516 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000073.scope: Consumed 14.361s CPU time.
Nov 25 03:53:42 np0005534516 systemd-machined[215790]: Machine qemu-143-instance-00000073 terminated.
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : haproxy version is 2.8.14-c23fe91
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : path to executable is /usr/sbin/haproxy
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : Exiting Master process...
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : Exiting Master process...
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [ALERT]    (374040) : Current worker (374042) exited with code 143 (Terminated)
Nov 25 03:53:42 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : All workers exited. Exiting... (0)
Nov 25 03:53:42 np0005534516 systemd[1]: libpod-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope: Deactivated successfully.
Nov 25 03:53:42 np0005534516 podman[374212]: 2025-11-25 08:53:42.224924311 +0000 UTC m=+0.072469805 container died 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 03:53:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd-userdata-shm.mount: Deactivated successfully.
Nov 25 03:53:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d53e3214adfe5885526c3929acf65ff9e3b10430b7f7fcf526d2c5c055a47429-merged.mount: Deactivated successfully.
Nov 25 03:53:42 np0005534516 podman[374212]: 2025-11-25 08:53:42.282132297 +0000 UTC m=+0.129677761 container cleanup 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:53:42 np0005534516 systemd[1]: libpod-conmon-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope: Deactivated successfully.
Nov 25 03:53:42 np0005534516 podman[374250]: 2025-11-25 08:53:42.372790438 +0000 UTC m=+0.055749807 container remove 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8831d-8f03-42f7-b65e-4cd416ac924c]: (4, ('Tue Nov 25 08:53:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd)\n4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd\nTue Nov 25 08:53:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd)\n4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.381 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9db0a86-5b34-4261-934f-c4e0d3d6d8ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.382 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:42 np0005534516 kernel: tapab581a21-50: left promiscuous mode
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e203bbd7-abad-4a8c-8dde-133fb9e789da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac45020-4aca-40e0-ab4e-13b5707674dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.434 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a887f43-8ffb-4329-98a3-1423374b08bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.450 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf427cf-ee95-4028-848d-215fc19f7f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615431, 'reachable_time': 18459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374267, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 systemd[1]: run-netns-ovnmeta\x2dab581a21\x2d5712\x2d4b8e\x2d87f9\x2db943349fbfcb.mount: Deactivated successfully.
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.453 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.454 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[014b3946-7bfb-46dc-8a67-ced0cc5b367b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.507 253542 DEBUG nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.507 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.509 253542 WARNING nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 03:53:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.538 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.539 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.678 253542 INFO nova.virt.libvirt.driver [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.685 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.685 253542 DEBUG nova.objects.instance [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.694 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.771 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:42 np0005534516 nova_compute[253538]: 2025-11-25 08:53:42.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.586 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:53:43 np0005534516 nova_compute[253538]: 2025-11-25 08:53:43.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 03:53:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:53:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971015882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.104 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.218 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.219 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.228 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.229 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.236 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:53:44 np0005534516 podman[374290]: 2025-11-25 08:53:44.306136699 +0000 UTC m=+0.138133642 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.333 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.334 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.342 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.343 253542 INFO nova.compute.claims [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.485 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.567 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.89712142944336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.569 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 DEBUG nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 WARNING nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 03:53:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:53:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128690214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.969 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.977 253542 DEBUG nova.compute.provider_tree [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:53:44 np0005534516 nova_compute[253538]: 2025-11-25 08:53:44.997 253542 DEBUG nova.scheduler.client.report [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.019 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.020 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.026 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.102 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.103 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.126 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.130 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 76611b0b-db06-4903-a22a-59b23a1e0d48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2da7049d-715e-4209-8c17-dda96ff6a192 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.164 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.200 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.312 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.314 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.315 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating image(s)#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.346 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.380 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.410 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.416 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.518 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.520 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.521 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.521 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:45.541 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.551 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.556 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:53:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163139968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.664 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.671 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.686 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.713 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.880 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.958 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:53:45 np0005534516 nova_compute[253538]: 2025-11-25 08:53:45.994 253542 DEBUG nova.policy [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.063 253542 DEBUG nova.objects.instance [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.076 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.077 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Ensure instance console log exists: /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.077 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.228 253542 INFO nova.compute.manager [None req-80114ece-5bfa-4ba3-981a-b469714094ff 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.637 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.660 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.660 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.661 253542 DEBUG nova.network.neutron [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:53:46 np0005534516 nova_compute[253538]: 2025-11-25 08:53:46.662 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'info_cache' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:47 np0005534516 nova_compute[253538]: 2025-11-25 08:53:47.176 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Successfully created port: 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:53:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:47 np0005534516 nova_compute[253538]: 2025-11-25 08:53:47.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 258 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 1.3 MiB/s wr, 31 op/s
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.919 253542 DEBUG nova.network.neutron [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.939 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.962 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Successfully updated port: 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.970 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.971 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.985 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:53:48 np0005534516 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.006 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.006 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.007 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.008 253542 DEBUG os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0eb0246-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.026 253542 INFO os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.039 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start _get_guest_xml network_info=[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.043 253542 DEBUG nova.compute.manager [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-changed-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.044 253542 DEBUG nova.compute.manager [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Refreshing instance network info cache due to event network-changed-0797e76b-3f15-4c7e-ae0d-0f4813d59967. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.044 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.048 253542 WARNING nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.052 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.053 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.056 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.057 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.058 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.058 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.059 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.060 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.060 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.063 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.063 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.077 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.204 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:53:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687222348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.542 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:49 np0005534516 nova_compute[253538]: 2025-11-25 08:53:49.575 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 03:53:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:50 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791381028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.052 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.054 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.054 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.055 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.056 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.072 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <uuid>2da7049d-715e-4209-8c17-dda96ff6a192</uuid>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <name>instance-00000073</name>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1007468219</nova:name>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:53:49</nova:creationTime>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <nova:port uuid="e0eb0246-9869-4c10-b45b-bd0799ae0c95">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="serial">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="uuid">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk.config">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ea:4d:89"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <target dev="tape0eb0246-98"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log" append="off"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:53:50 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:53:50 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:53:50 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:53:50 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.075 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.075 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.076 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.077 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.078 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.078 253542 DEBUG os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.080 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.081 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.085 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0eb0246-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.086 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0eb0246-98, col_values=(('external_ids', {'iface-id': 'e0eb0246-9869-4c10-b45b-bd0799ae0c95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:4d:89', 'vm-uuid': '2da7049d-715e-4209-8c17-dda96ff6a192'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.0885] manager: (tape0eb0246-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.095 253542 INFO os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.140 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.141 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.143 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d50bcd8c-a813-4d0d-8f7e-d38a1dfe5eb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 kernel: tape0eb0246-98: entered promiscuous mode
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.1833] manager: (tape0eb0246-98): new Tun device (/org/freedesktop/NetworkManager/Devices/481)
Nov 25 03:53:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:50Z|01159|binding|INFO|Claiming lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 for this chassis.
Nov 25 03:53:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:50Z|01160|binding|INFO|e0eb0246-9869-4c10-b45b-bd0799ae0c95: Claiming fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.192 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb bound to our chassis#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.194 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab581a21-5712-4b8e-87f9-b943349fbfcb#033[00m
Nov 25 03:53:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:50Z|01161|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 ovn-installed in OVS
Nov 25 03:53:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:50Z|01162|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 up in Southbound
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[070b9ec9-b205-4cf9-a371-93aacea29a95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.226 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab581a21-51 in ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.228 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab581a21-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67cfda9f-b2a4-4769-a679-2de6544aa5c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[538994f5-3a7c-4c3e-9613-1e59a08c4a8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 systemd-machined[215790]: New machine qemu-144-instance-00000073.
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.246 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[937ba49f-b93a-4ff5-b6db-a5c19f2ecec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Nov 25 03:53:50 np0005534516 systemd-udevd[374607]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.2775] device (tape0eb0246-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.2787] device (tape0eb0246-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[040d138c-6e06-4ed9-9374-4a77edd5b06a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad730a4b-6803-409f-b3ad-de5b76073976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.3203] manager: (tapab581a21-50): new Veth device (/org/freedesktop/NetworkManager/Devices/482)
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3aa34-1ad9-482e-b212-b360019fa8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 systemd-udevd[374610]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.360 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c28977bc-401f-4130-b66e-7fca89d636a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.364 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8410c992-1eef-4fbf-b301-a1978d3b474d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.3988] device (tapab581a21-50): carrier: link connected
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.406 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aacb59d2-e03f-4bdf-8aea-03e2e9bb8c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.428 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f230d77-4c37-46b3-bb9e-cf71bd7d772c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618499, 'reachable_time': 26439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374637, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09771235-a8cc-4290-83e4-ef21b52a863b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:797a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618499, 'tstamp': 618499}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374638, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.470 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01e239f5-58bf-4c5a-bf9a-580eef201909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618499, 'reachable_time': 26439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374639, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.484 253542 DEBUG nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.485 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.486 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.486 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.487 253542 DEBUG nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.488 253542 WARNING nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.515 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a66a8a0-7fb5-4a3a-a991-72bbb173f465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.575 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56747a97-e240-48c2-ae84-652efd43dd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.611 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.611 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.612 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab581a21-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 NetworkManager[48915]: <info>  [1764060830.6152] manager: (tapab581a21-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Nov 25 03:53:50 np0005534516 kernel: tapab581a21-50: entered promiscuous mode
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.618 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab581a21-50, col_values=(('external_ids', {'iface-id': 'b956a451-af5c-4f4e-b3b8-704d71686765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:50Z|01163|binding|INFO|Releasing lport b956a451-af5c-4f4e-b3b8-704d71686765 from this chassis (sb_readonly=0)
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.649 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e17c153a-5fdb-4de5-8368-66f7b45c57ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.651 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:53:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.651 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'env', 'PROCESS_TAG=haproxy-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab581a21-5712-4b8e-87f9-b943349fbfcb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.700 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.701 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance network_info: |[{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.701 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.702 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Refreshing network info cache for port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.705 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start _get_guest_xml network_info=[{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.713 253542 WARNING nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.723 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.724 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.728 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.729 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.730 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.730 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.737 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.833 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 2da7049d-715e-4209-8c17-dda96ff6a192 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.835 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060830.8317401, 2da7049d-715e-4209-8c17-dda96ff6a192 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.836 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.841 253542 DEBUG nova.compute.manager [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.848 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance rebooted successfully.#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.849 253542 DEBUG nova.compute.manager [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.857 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.863 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.882 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.882 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060830.8340163, 2da7049d-715e-4209-8c17-dda96ff6a192 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.883 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Started (Lifecycle Event)#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.904 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.907 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:50 np0005534516 nova_compute[253538]: 2025-11-25 08:53:50.951 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 03:53:51 np0005534516 podman[374730]: 2025-11-25 08:53:51.000949509 +0000 UTC m=+0.024023324 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:53:51 np0005534516 podman[374730]: 2025-11-25 08:53:51.132296035 +0000 UTC m=+0.155369820 container create ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:53:51 np0005534516 systemd[1]: Started libpod-conmon-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope.
Nov 25 03:53:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:53:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500140972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a887c0faf322aa9a07d1834b6a53fb5f904fd4dfee4a01fe03dcd58a1b43320c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.252 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:51 np0005534516 podman[374730]: 2025-11-25 08:53:51.267877086 +0000 UTC m=+0.290950901 container init ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:53:51 np0005534516 podman[374730]: 2025-11-25 08:53:51.274816314 +0000 UTC m=+0.297890099 container start ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.276 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.280 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:51 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : New worker (374773) forked
Nov 25 03:53:51 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : Loading success.
Nov 25 03:53:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:53:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968671767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.785 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.787 253542 DEBUG nova.virt.libvirt.vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:45Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.787 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.789 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.791 253542 DEBUG nova.objects.instance [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.805 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <uuid>5e14b791-8860-44a3-87e0-5c7fcc1dcf12</uuid>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <name>instance-00000074</name>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-160796765</nova:name>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:53:50</nova:creationTime>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <nova:port uuid="0797e76b-3f15-4c7e-ae0d-0f4813d59967">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="serial">5e14b791-8860-44a3-87e0-5c7fcc1dcf12</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="uuid">5e14b791-8860-44a3-87e0-5c7fcc1dcf12</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:21:4a:e2"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <target dev="tap0797e76b-3f"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/console.log" append="off"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:53:51 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:53:51 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:53:51 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:53:51 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Preparing to wait for external event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:53:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.808 253542 DEBUG nova.virt.libvirt.vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:45Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.808 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.809 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.809 253542 DEBUG os_vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.813 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.813 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0797e76b-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0797e76b-3f, col_values=(('external_ids', {'iface-id': '0797e76b-3f15-4c7e-ae0d-0f4813d59967', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:4a:e2', 'vm-uuid': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:51 np0005534516 NetworkManager[48915]: <info>  [1764060831.8221] manager: (tap0797e76b-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.825 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.830 253542 INFO os_vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f')#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.879 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.879 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.880 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:21:4a:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.880 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Using config drive#033[00m
Nov 25 03:53:51 np0005534516 nova_compute[253538]: 2025-11-25 08:53:51.900 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.405 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating config drive at /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.414 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f46sz4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.459 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updated VIF entry in instance network info cache for port 0797e76b-3f15-4c7e-ae0d-0f4813d59967. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.460 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.479 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:53:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.550 253542 DEBUG nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.551 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.551 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 DEBUG nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 WARNING nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.563 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f46sz4b" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.587 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.590 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.994 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:53:52 np0005534516 nova_compute[253538]: 2025-11-25 08:53:52.995 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deleting local config drive /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config because it was imported into RBD.#033[00m
Nov 25 03:53:53 np0005534516 NetworkManager[48915]: <info>  [1764060833.0681] manager: (tap0797e76b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Nov 25 03:53:53 np0005534516 kernel: tap0797e76b-3f: entered promiscuous mode
Nov 25 03:53:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:53Z|01164|binding|INFO|Claiming lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 for this chassis.
Nov 25 03:53:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:53Z|01165|binding|INFO|0797e76b-3f15-4c7e-ae0d-0f4813d59967: Claiming fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.083 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.084 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 bound to our chassis#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.086 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37#033[00m
Nov 25 03:53:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:53Z|01166|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 ovn-installed in OVS
Nov 25 03:53:53 np0005534516 ovn_controller[152859]: 2025-11-25T08:53:53Z|01167|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 up in Southbound
Nov 25 03:53:53 np0005534516 NetworkManager[48915]: <info>  [1764060833.0919] device (tap0797e76b-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:53 np0005534516 NetworkManager[48915]: <info>  [1764060833.0931] device (tap0797e76b-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.105 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a42ee767-f798-4f91-8806-31ffe2305cb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 systemd-machined[215790]: New machine qemu-145-instance-00000074.
Nov 25 03:53:53 np0005534516 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.143 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[37484728-cea6-4447-bb1f-494511d09975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.147 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06fadc81-0445-4bd8-a058-e1c4c3e435dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.173 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce0caf0-f582-4a09-b62c-77575fb5657d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[428b7952-e8e2-458b-bf09-eaf0e50539f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374884, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[644c76bc-0f32-4a4e-bf3c-0ed006154bc5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374888, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374888, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.239 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.240 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:53:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.307 253542 DEBUG nova.compute.manager [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.307 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.308 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.309 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:53 np0005534516 nova_compute[253538]: 2025-11-25 08:53:53.309 253542 DEBUG nova.compute.manager [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Processing event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:53:53
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 745 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:53:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:53:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.414 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.416 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:53:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.419 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:53:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e81db3f6-e5b2-4538-a861-2c2e396c4298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.566 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.567 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5656435, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.567 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Started (Lifecycle Event)#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.570 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.575 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance spawned successfully.#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.576 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.596 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.602 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.603 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.603 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.627 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.627 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5678415, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.628 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.698 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.703 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5699406, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.703 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.743 253542 INFO nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 9.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.744 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.790 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.793 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.836 253542 INFO nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 10.55 seconds to build instance.#033[00m
Nov 25 03:53:54 np0005534516 nova_compute[253538]: 2025-11-25 08:53:54.850 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.431 253542 DEBUG nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.433 253542 DEBUG nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:53:55 np0005534516 nova_compute[253538]: 2025-11-25 08:53:55.433 253542 WARNING nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:53:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 03:53:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.395 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:53:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.396 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:53:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.398 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:53:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8dd362-dbce-4a1a-88b5-16ad33777c32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:53:56 np0005534516 nova_compute[253538]: 2025-11-25 08:53:56.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:53:57 np0005534516 nova_compute[253538]: 2025-11-25 08:53:57.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:53:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 03:53:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 03:54:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 145 op/s
Nov 25 03:54:01 np0005534516 nova_compute[253538]: 2025-11-25 08:54:01.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.352 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.356 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:54:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.358 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1752d2d1-eb23-4be0-b88f-16ae1d5d8646]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:02 np0005534516 nova_compute[253538]: 2025-11-25 08:54:02.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.107 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.108 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:54:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.110 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54b9514f-d060-4cb4-b665-ec5ffeb5e9e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 139 op/s
Nov 25 03:54:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:04Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018696304006316337 of space, bias 1.0, pg target 0.5608891201894901 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:54:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:54:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 26 KiB/s wr, 141 op/s
Nov 25 03:54:06 np0005534516 nova_compute[253538]: 2025-11-25 08:54:06.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.141 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.142 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:54:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.143 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e1641-ed63-4c13-92fc-dfe31a659a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:07 np0005534516 nova_compute[253538]: 2025-11-25 08:54:07.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 296 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 466 KiB/s wr, 139 op/s
Nov 25 03:54:07 np0005534516 podman[374941]: 2025-11-25 08:54:07.82807465 +0000 UTC m=+0.066903532 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:54:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.128 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.129 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:54:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.130 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0577f64e-5fd8-48a5-af1a-79038e46b231]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:08Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 03:54:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:08Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 03:54:09 np0005534516 nova_compute[253538]: 2025-11-25 08:54:09.430 253542 INFO nova.compute.manager [None req-cd96e6ac-1fda-4d66-9ebc-cd22ff3c24df 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output#033[00m
Nov 25 03:54:09 np0005534516 nova_compute[253538]: 2025-11-25 08:54:09.434 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:54:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 317 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 25 03:54:09 np0005534516 podman[374960]: 2025-11-25 08:54:09.823058561 +0000 UTC m=+0.071573368 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.193 253542 DEBUG nova.compute.manager [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.193 253542 DEBUG nova.compute.manager [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.310 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.311 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.312 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.312 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.313 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.314 253542 INFO nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Terminating instance#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.317 253542 DEBUG nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:54:10 np0005534516 kernel: tape0eb0246-98 (unregistering): left promiscuous mode
Nov 25 03:54:10 np0005534516 NetworkManager[48915]: <info>  [1764060850.3800] device (tape0eb0246-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:10Z|01168|binding|INFO|Releasing lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 from this chassis (sb_readonly=0)
Nov 25 03:54:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:10Z|01169|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 down in Southbound
Nov 25 03:54:10 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:10Z|01170|binding|INFO|Removing iface tape0eb0246-98 ovn-installed in OVS
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 03:54:10 np0005534516 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 13.689s CPU time.
Nov 25 03:54:10 np0005534516 systemd-machined[215790]: Machine qemu-144-instance-00000073 terminated.
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.496 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.499 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb unbound from our chassis#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.501 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab581a21-5712-4b8e-87f9-b943349fbfcb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.503 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d191c10-b7f0-4c39-a55d-26742aa33a06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.504 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace which is not needed anymore#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.562 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.563 253542 DEBUG nova.objects.instance [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.579 253542 DEBUG nova.virt.libvirt.vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:50Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.581 253542 DEBUG nova.network.os_vif_util [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.582 253542 DEBUG nova.network.os_vif_util [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.582 253542 DEBUG os_vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.585 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0eb0246-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.592 253542 INFO os_vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')#033[00m
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : haproxy version is 2.8.14-c23fe91
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : path to executable is /usr/sbin/haproxy
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : Exiting Master process...
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : Exiting Master process...
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [ALERT]    (374770) : Current worker (374773) exited with code 143 (Terminated)
Nov 25 03:54:10 np0005534516 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : All workers exited. Exiting... (0)
Nov 25 03:54:10 np0005534516 systemd[1]: libpod-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope: Deactivated successfully.
Nov 25 03:54:10 np0005534516 podman[375013]: 2025-11-25 08:54:10.664959397 +0000 UTC m=+0.059364047 container died ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:54:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f-userdata-shm.mount: Deactivated successfully.
Nov 25 03:54:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a887c0faf322aa9a07d1834b6a53fb5f904fd4dfee4a01fe03dcd58a1b43320c-merged.mount: Deactivated successfully.
Nov 25 03:54:10 np0005534516 podman[375013]: 2025-11-25 08:54:10.740897254 +0000 UTC m=+0.135301904 container cleanup ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:54:10 np0005534516 systemd[1]: libpod-conmon-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope: Deactivated successfully.
Nov 25 03:54:10 np0005534516 podman[375063]: 2025-11-25 08:54:10.820324786 +0000 UTC m=+0.055714358 container remove ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.830 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84f7a9e6-4399-484d-9273-c113f3e1353d]: (4, ('Tue Nov 25 08:54:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f)\nac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f\nTue Nov 25 08:54:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f)\nac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.833 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b28a108-899d-48a6-a452-6d3208a352ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 kernel: tapab581a21-50: left promiscuous mode
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[383f4016-f80d-4b34-ab70-1bbf137ec4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.882 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27d5f58b-aa6a-4281-a3e0-93e46c0785ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.884 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e383ea9f-35b7-4df5-8077-4835c5edf706]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.889 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.889 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.890 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.890 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.891 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:10 np0005534516 nova_compute[253538]: 2025-11-25 08:54:10.891 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5de881-aa0f-4c41-b38a-701fdc8510a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618490, 'reachable_time': 16296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375078, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.904 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:54:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.904 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a1801047-96cc-4696-ae9f-a48cf6676ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:10 np0005534516 systemd[1]: run-netns-ovnmeta\x2dab581a21\x2d5712\x2d4b8e\x2d87f9\x2db943349fbfcb.mount: Deactivated successfully.
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.253 253542 INFO nova.virt.libvirt.driver [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deleting instance files /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192_del#033[00m
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.254 253542 INFO nova.virt.libvirt.driver [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deletion of /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192_del complete#033[00m
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.321 253542 INFO nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG oslo.service.loopingcall [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:54:11 np0005534516 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG nova.network.neutron [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:54:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 299 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 2.2 MiB/s wr, 134 op/s
Nov 25 03:54:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.657 253542 DEBUG nova.network.neutron [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.681 253542 INFO nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.720 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.721 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:12 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.822 253542 DEBUG oslo_concurrency.processutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:12.999 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.000 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.002 253542 WARNING nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.002 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-deleted-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.205 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.205 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.224 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:54:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:54:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533077817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.293 253542 DEBUG oslo_concurrency.processutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.304 253542 DEBUG nova.compute.provider_tree [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.325 253542 DEBUG nova.scheduler.client.report [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.364 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.422 253542 INFO nova.scheduler.client.report [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 2da7049d-715e-4209-8c17-dda96ff6a192#033[00m
Nov 25 03:54:13 np0005534516 nova_compute[253538]: 2025-11-25 08:54:13.491 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 273 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Nov 25 03:54:14 np0005534516 podman[375103]: 2025-11-25 08:54:14.864471493 +0000 UTC m=+0.112873014 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:54:15 np0005534516 nova_compute[253538]: 2025-11-25 08:54:15.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 03:54:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.223 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.224 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated#033[00m
Nov 25 03:54:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.225 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb9c72b-93c9-41e4-bac8-24240d6ef951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.777 253542 DEBUG nova.compute.manager [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.777 253542 DEBUG nova.compute.manager [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:54:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Nov 25 03:54:17 np0005534516 nova_compute[253538]: 2025-11-25 08:54:17.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 41318a4f-00ed-4c34-8700-0c603a4e2c15 does not exist
Nov 25 03:54:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e138be15-3e66-49b8-820f-da43b86b2c2a does not exist
Nov 25 03:54:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6c5287ec-572e-4aad-9f4e-59a72c9e65c4 does not exist
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:54:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:19Z|01171|binding|INFO|Releasing lport 4bc48b70-3942-46d1-ac71-5fa19e5d9ae3 from this chassis (sb_readonly=0)
Nov 25 03:54:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:19Z|01172|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 03:54:19 np0005534516 nova_compute[253538]: 2025-11-25 08:54:19.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 03:54:19 np0005534516 nova_compute[253538]: 2025-11-25 08:54:19.840 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:54:19 np0005534516 nova_compute[253538]: 2025-11-25 08:54:19.841 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:54:19 np0005534516 nova_compute[253538]: 2025-11-25 08:54:19.854 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.192760862 +0000 UTC m=+0.047784152 container create 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:54:20 np0005534516 systemd[1]: Started libpod-conmon-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope.
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.172530622 +0000 UTC m=+0.027553952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.297323468 +0000 UTC m=+0.152346778 container init 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.309989713 +0000 UTC m=+0.165013043 container start 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.313681983 +0000 UTC m=+0.168705283 container attach 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:54:20 np0005534516 affectionate_fermat[375419]: 167 167
Nov 25 03:54:20 np0005534516 systemd[1]: libpod-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope: Deactivated successfully.
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.319183713 +0000 UTC m=+0.174207053 container died 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:54:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-335d52f3583ee6922a8fc22fb8cf7bc018025bf073b9c7b46a4ccdb037c04064-merged.mount: Deactivated successfully.
Nov 25 03:54:20 np0005534516 podman[375404]: 2025-11-25 08:54:20.366793189 +0000 UTC m=+0.221816519 container remove 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 03:54:20 np0005534516 systemd[1]: libpod-conmon-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope: Deactivated successfully.
Nov 25 03:54:20 np0005534516 nova_compute[253538]: 2025-11-25 08:54:20.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:20 np0005534516 podman[375442]: 2025-11-25 08:54:20.628534713 +0000 UTC m=+0.056902870 container create 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 03:54:20 np0005534516 systemd[1]: Started libpod-conmon-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope.
Nov 25 03:54:20 np0005534516 podman[375442]: 2025-11-25 08:54:20.602920666 +0000 UTC m=+0.031288843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:20 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:20 np0005534516 podman[375442]: 2025-11-25 08:54:20.738445785 +0000 UTC m=+0.166814022 container init 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 03:54:20 np0005534516 podman[375442]: 2025-11-25 08:54:20.746765102 +0000 UTC m=+0.175133259 container start 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:54:20 np0005534516 podman[375442]: 2025-11-25 08:54:20.751722336 +0000 UTC m=+0.180090493 container attach 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:54:21 np0005534516 eager_aryabhata[375459]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:54:21 np0005534516 eager_aryabhata[375459]: --> relative data size: 1.0
Nov 25 03:54:21 np0005534516 eager_aryabhata[375459]: --> All data devices are unavailable
Nov 25 03:54:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 03:54:21 np0005534516 systemd[1]: libpod-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Deactivated successfully.
Nov 25 03:54:21 np0005534516 systemd[1]: libpod-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Consumed 1.017s CPU time.
Nov 25 03:54:21 np0005534516 podman[375442]: 2025-11-25 08:54:21.862525061 +0000 UTC m=+1.290893218 container died 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:54:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1-merged.mount: Deactivated successfully.
Nov 25 03:54:21 np0005534516 podman[375442]: 2025-11-25 08:54:21.921365452 +0000 UTC m=+1.349733629 container remove 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:54:21 np0005534516 systemd[1]: libpod-conmon-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Deactivated successfully.
Nov 25 03:54:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.632223171 +0000 UTC m=+0.049781746 container create 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 03:54:22 np0005534516 systemd[1]: Started libpod-conmon-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope.
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.6079581 +0000 UTC m=+0.025516755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.721981074 +0000 UTC m=+0.139539669 container init 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.730077515 +0000 UTC m=+0.147636090 container start 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:54:22 np0005534516 serene_sanderson[375659]: 167 167
Nov 25 03:54:22 np0005534516 systemd[1]: libpod-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope: Deactivated successfully.
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.748839885 +0000 UTC m=+0.166398480 container attach 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.749954396 +0000 UTC m=+0.167512971 container died 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:54:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d1609451b0be2cdfb2634def33fe4d7fa2f9f9e310a628fa0e30e19750fd263b-merged.mount: Deactivated successfully.
Nov 25 03:54:22 np0005534516 podman[375641]: 2025-11-25 08:54:22.801016135 +0000 UTC m=+0.218574710 container remove 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 03:54:22 np0005534516 systemd[1]: libpod-conmon-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope: Deactivated successfully.
Nov 25 03:54:22 np0005534516 nova_compute[253538]: 2025-11-25 08:54:22.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:22 np0005534516 podman[375683]: 2025-11-25 08:54:22.985838826 +0000 UTC m=+0.041029338 container create 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:54:23 np0005534516 systemd[1]: Started libpod-conmon-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope.
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:22.968874894 +0000 UTC m=+0.024065426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:23.081043227 +0000 UTC m=+0.136233769 container init 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:23.088002276 +0000 UTC m=+0.143192798 container start 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:23.091184273 +0000 UTC m=+0.146374785 container attach 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 16 KiB/s wr, 4 op/s
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]: {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    "0": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "devices": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "/dev/loop3"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            ],
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_name": "ceph_lv0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_size": "21470642176",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "name": "ceph_lv0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "tags": {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_name": "ceph",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.crush_device_class": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.encrypted": "0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_id": "0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.vdo": "0"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            },
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "vg_name": "ceph_vg0"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        }
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    ],
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    "1": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "devices": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "/dev/loop4"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            ],
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_name": "ceph_lv1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_size": "21470642176",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "name": "ceph_lv1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "tags": {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_name": "ceph",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.crush_device_class": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.encrypted": "0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_id": "1",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.vdo": "0"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            },
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "vg_name": "ceph_vg1"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        }
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    ],
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    "2": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "devices": [
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "/dev/loop5"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            ],
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_name": "ceph_lv2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_size": "21470642176",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "name": "ceph_lv2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "tags": {
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.cluster_name": "ceph",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.crush_device_class": "",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.encrypted": "0",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osd_id": "2",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:                "ceph.vdo": "0"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            },
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "type": "block",
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:            "vg_name": "ceph_vg2"
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:        }
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]:    ]
Nov 25 03:54:23 np0005534516 adoring_dirac[375700]: }
Nov 25 03:54:23 np0005534516 systemd[1]: libpod-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope: Deactivated successfully.
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:23.881779182 +0000 UTC m=+0.936969704 container died 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 03:54:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f-merged.mount: Deactivated successfully.
Nov 25 03:54:23 np0005534516 podman[375683]: 2025-11-25 08:54:23.945745902 +0000 UTC m=+1.000936414 container remove 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:54:23 np0005534516 systemd[1]: libpod-conmon-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope: Deactivated successfully.
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.683452532 +0000 UTC m=+0.049525649 container create 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 03:54:24 np0005534516 systemd[1]: Started libpod-conmon-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope.
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.664986539 +0000 UTC m=+0.031059606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.793122707 +0000 UTC m=+0.159195824 container init 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.811060645 +0000 UTC m=+0.177133662 container start 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:54:24 np0005534516 exciting_curran[375877]: 167 167
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.81747153 +0000 UTC m=+0.183544567 container attach 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 03:54:24 np0005534516 systemd[1]: libpod-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope: Deactivated successfully.
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.818290742 +0000 UTC m=+0.184363759 container died 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:54:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a33d0b57af9dd363113d18594d7a4245bc24e7ba31584de762a5209c1b10a160-merged.mount: Deactivated successfully.
Nov 25 03:54:24 np0005534516 podman[375861]: 2025-11-25 08:54:24.863663957 +0000 UTC m=+0.229736974 container remove 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:54:24 np0005534516 systemd[1]: libpod-conmon-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope: Deactivated successfully.
Nov 25 03:54:25 np0005534516 podman[375902]: 2025-11-25 08:54:25.085934007 +0000 UTC m=+0.042769165 container create 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:54:25 np0005534516 systemd[1]: Started libpod-conmon-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope.
Nov 25 03:54:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:25 np0005534516 podman[375902]: 2025-11-25 08:54:25.069722536 +0000 UTC m=+0.026557714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:54:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:25 np0005534516 podman[375902]: 2025-11-25 08:54:25.18741711 +0000 UTC m=+0.144252308 container init 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 03:54:25 np0005534516 podman[375902]: 2025-11-25 08:54:25.198467031 +0000 UTC m=+0.155302219 container start 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:54:25 np0005534516 podman[375902]: 2025-11-25 08:54:25.20289136 +0000 UTC m=+0.159726508 container attach 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 03:54:25 np0005534516 nova_compute[253538]: 2025-11-25 08:54:25.558 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060850.557127, 2da7049d-715e-4209-8c17-dda96ff6a192 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:54:25 np0005534516 nova_compute[253538]: 2025-11-25 08:54:25.559 253542 INFO nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:54:25 np0005534516 nova_compute[253538]: 2025-11-25 08:54:25.574 253542 DEBUG nova.compute.manager [None req-58fc3531-b33d-468a-b94f-566b5baaaed7 - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:54:25 np0005534516 nova_compute[253538]: 2025-11-25 08:54:25.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s
Nov 25 03:54:26 np0005534516 jovial_moore[375919]: {
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_id": 1,
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "type": "bluestore"
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    },
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_id": 2,
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "type": "bluestore"
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    },
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_id": 0,
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:        "type": "bluestore"
Nov 25 03:54:26 np0005534516 jovial_moore[375919]:    }
Nov 25 03:54:26 np0005534516 jovial_moore[375919]: }
Nov 25 03:54:26 np0005534516 systemd[1]: libpod-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Deactivated successfully.
Nov 25 03:54:26 np0005534516 systemd[1]: libpod-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Consumed 1.037s CPU time.
Nov 25 03:54:26 np0005534516 podman[375902]: 2025-11-25 08:54:26.226734849 +0000 UTC m=+1.183570017 container died 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:54:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9-merged.mount: Deactivated successfully.
Nov 25 03:54:26 np0005534516 podman[375902]: 2025-11-25 08:54:26.289094116 +0000 UTC m=+1.245929274 container remove 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 03:54:26 np0005534516 systemd[1]: libpod-conmon-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Deactivated successfully.
Nov 25 03:54:26 np0005534516 nova_compute[253538]: 2025-11-25 08:54:26.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:54:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:54:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 05a83e71-60ef-4169-b208-36978a2250e0 does not exist
Nov 25 03:54:26 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6306d883-3669-479c-8db8-624fe51bfe5b does not exist
Nov 25 03:54:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:27 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:54:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 25 03:54:27 np0005534516 nova_compute[253538]: 2025-11-25 08:54:27.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:54:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:54:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:54:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:54:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 03:54:30 np0005534516 nova_compute[253538]: 2025-11-25 08:54:30.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:30 np0005534516 nova_compute[253538]: 2025-11-25 08:54:30.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Nov 25 03:54:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:32 np0005534516 nova_compute[253538]: 2025-11-25 08:54:32.713 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:54:32 np0005534516 nova_compute[253538]: 2025-11-25 08:54:32.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 03:54:34 np0005534516 nova_compute[253538]: 2025-11-25 08:54:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.363 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.364 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.381 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.472 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.473 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.484 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.485 253542 INFO nova.compute.claims [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:35 np0005534516 nova_compute[253538]: 2025-11-25 08:54:35.669 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:54:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 25 03:54:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:54:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600970949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.178 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.185 253542 DEBUG nova.compute.provider_tree [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.196 253542 DEBUG nova.scheduler.client.report [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.227 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.228 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.315 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.315 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.336 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.354 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.440 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.441 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.442 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating image(s)
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.466 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.496 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.527 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.531 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.637 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.638 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.639 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.639 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.660 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.663 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e82fa8c-6663-439c-833c-2b28f22282a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:54:36 np0005534516 nova_compute[253538]: 2025-11-25 08:54:36.966 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e82fa8c-6663-439c-833c-2b28f22282a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.041 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.078 253542 DEBUG nova.policy [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.123 253542 DEBUG nova.objects.instance [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.144 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Ensure instance console log exists: /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.146 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:54:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.596 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 03:54:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 5.4 KiB/s wr, 0 op/s
Nov 25 03:54:37 np0005534516 nova_compute[253538]: 2025-11-25 08:54:37.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:38 np0005534516 nova_compute[253538]: 2025-11-25 08:54:38.753 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Successfully created port: 52157627-d75e-4670-9215-6471bda94ba6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:54:38 np0005534516 podman[376203]: 2025-11-25 08:54:38.801163205 +0000 UTC m=+0.054431202 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:54:39 np0005534516 nova_compute[253538]: 2025-11-25 08:54:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:39 np0005534516 nova_compute[253538]: 2025-11-25 08:54:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:54:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 273 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Nov 25 03:54:40 np0005534516 nova_compute[253538]: 2025-11-25 08:54:40.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:40 np0005534516 podman[376223]: 2025-11-25 08:54:40.812966775 +0000 UTC m=+0.066391368 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:54:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:54:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 03:54:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:42 np0005534516 nova_compute[253538]: 2025-11-25 08:54:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:42 np0005534516 nova_compute[253538]: 2025-11-25 08:54:42.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:43 np0005534516 nova_compute[253538]: 2025-11-25 08:54:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 03:54:44 np0005534516 nova_compute[253538]: 2025-11-25 08:54:44.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:54:45 np0005534516 nova_compute[253538]: 2025-11-25 08:54:45.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:54:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 03:54:45 np0005534516 podman[376254]: 2025-11-25 08:54:45.853362568 +0000 UTC m=+0.106580902 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:54:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:54:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637140863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.042 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.137 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.138 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.145 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.409 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3370MB free_disk=59.876190185546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:46 np0005534516 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 03:54:47 np0005534516 nova_compute[253538]: 2025-11-25 08:54:47.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.418 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 76611b0b-db06-4903-a22a-59b23a1e0d48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7e82fa8c-6663-439c-833c-2b28f22282a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.713 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.875 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Successfully updated port: 52157627-d75e-4670-9215-6471bda94ba6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:48.884 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:48.885 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.910 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.911 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.911 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.991 253542 DEBUG nova.compute.manager [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.991 253542 DEBUG nova.compute.manager [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:54:48 np0005534516 nova_compute[253538]: 2025-11-25 08:54:48.992 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:54:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:54:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060622671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:54:49 np0005534516 nova_compute[253538]: 2025-11-25 08:54:49.176 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:49 np0005534516 nova_compute[253538]: 2025-11-25 08:54:49.181 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:54:49 np0005534516 nova_compute[253538]: 2025-11-25 08:54:49.195 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:54:49 np0005534516 nova_compute[253538]: 2025-11-25 08:54:49.233 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:54:49 np0005534516 nova_compute[253538]: 2025-11-25 08:54:49.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 03:54:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:49.888 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:50 np0005534516 nova_compute[253538]: 2025-11-25 08:54:50.061 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:54:50 np0005534516 nova_compute[253538]: 2025-11-25 08:54:50.229 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:54:50 np0005534516 nova_compute[253538]: 2025-11-25 08:54:50.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.829 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:54:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 575 KiB/s wr, 12 op/s
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.884 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.885 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance network_info: |[{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.885 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.886 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.888 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start _get_guest_xml network_info=[{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.892 253542 WARNING nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.897 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.898 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.905 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:54:51 np0005534516 nova_compute[253538]: 2025-11-25 08:54:51.912 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:54:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324861545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.349 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.373 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.376 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:54:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053244308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.831 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.834 253542 DEBUG nova.virt.libvirt.vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:54:36Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.834 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.836 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.838 253542 DEBUG nova.objects.instance [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.858 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <uuid>7e82fa8c-6663-439c-833c-2b28f22282a8</uuid>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <name>instance-00000075</name>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-360976027</nova:name>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:54:51</nova:creationTime>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <nova:port uuid="52157627-d75e-4670-9215-6471bda94ba6">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="serial">7e82fa8c-6663-439c-833c-2b28f22282a8</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="uuid">7e82fa8c-6663-439c-833c-2b28f22282a8</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7e82fa8c-6663-439c-833c-2b28f22282a8_disk">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:a5:b5:38"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <target dev="tap52157627-d7"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/console.log" append="off"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:54:52 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:54:52 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:54:52 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:54:52 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.859 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Preparing to wait for external event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.859 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.860 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.860 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.861 253542 DEBUG nova.virt.libvirt.vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:54:36Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.861 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.862 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.862 253542 DEBUG os_vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52157627-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52157627-d7, col_values=(('external_ids', {'iface-id': '52157627-d75e-4670-9215-6471bda94ba6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:b5:38', 'vm-uuid': '7e82fa8c-6663-439c-833c-2b28f22282a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:52 np0005534516 NetworkManager[48915]: <info>  [1764060892.8714] manager: (tap52157627-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:52 np0005534516 nova_compute[253538]: 2025-11-25 08:54:52.878 253542 INFO os_vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')#033[00m
Nov 25 03:54:53 np0005534516 nova_compute[253538]: 2025-11-25 08:54:53.155 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:54:53 np0005534516 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:54:53 np0005534516 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:a5:b5:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:54:53 np0005534516 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Using config drive#033[00m
Nov 25 03:54:53 np0005534516 nova_compute[253538]: 2025-11-25 08:54:53.194 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:54:53
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms']
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:54:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.073 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating config drive at /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.079 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkqel1zb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.232 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkqel1zb" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.269 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.274 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.709 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.710 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:54:54 np0005534516 nova_compute[253538]: 2025-11-25 08:54:54.725 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:54:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.455 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.455 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deleting local config drive /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config because it was imported into RBD.#033[00m
Nov 25 03:54:56 np0005534516 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:56Z|01173|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 03:54:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:56Z|01174|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.5249] manager: (tap52157627-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.530 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.531 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b bound to our chassis#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.532 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 703bdacb-53cd-40a1-9c2c-c632a29e049b#033[00m
Nov 25 03:54:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:56Z|01175|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 03:54:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:56Z|01176|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc42ec74-dd98-45a1-a8d0-9027e632f0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.549 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap703bdacb-51 in ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.552 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap703bdacb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2662a858-ec87-49c7-a988-ec5feb7a242f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13ccb0bd-41a3-4c79-9a5c-43ffc0f2cf72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 systemd-udevd[376448]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.568 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[703fc4af-05c2-4da4-9233-b1efd593b5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.5814] device (tap52157627-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.5827] device (tap52157627-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:54:56 np0005534516 systemd-machined[215790]: New machine qemu-146-instance-00000075.
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[446a638b-a76b-4c7b-96dd-c35c472c898c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 systemd[1]: Started Virtual Machine qemu-146-instance-00000075.
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.625 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ffe411-8b3a-4bfc-9327-98a5e17a71f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.632 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2adb51e3-6a4a-4905-8427-889fb6663782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 systemd-udevd[376454]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.6342] manager: (tap703bdacb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.686 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[43b772a6-1ade-45b3-af00-2783cc06b094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.691 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b84c5b-4f24-41fd-baa5-a8a3ae3bd956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.7251] device (tap703bdacb-50): carrier: link connected
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.734 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d87f492-8b70-4832-ba66-9405e954f7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.760 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5caf2357-7e37-4093-9886-0e50fe6eff38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625132, 'reachable_time': 20172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376482, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1086c4-96e6-4df2-b1ef-7cdea9e3c93f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:81bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625132, 'tstamp': 625132}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376483, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.797 253542 DEBUG nova.compute.manager [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.798 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.798 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.799 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.799 253542 DEBUG nova.compute.manager [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Processing event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73b7b0da-7c32-4517-a08f-f78fba4bf168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625132, 'reachable_time': 20172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376484, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85fd4858-cae6-4ede-a9e1-312e4bd3f122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.938 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff426b0d-5015-4823-b17e-77402984b5c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.939 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.939 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.940 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap703bdacb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:56 np0005534516 kernel: tap703bdacb-50: entered promiscuous mode
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 NetworkManager[48915]: <info>  [1764060896.9432] manager: (tap703bdacb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap703bdacb-50, col_values=(('external_ids', {'iface-id': '583866cf-82da-4259-9189-db9f58620872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:56Z|01177|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 nova_compute[253538]: 2025-11-25 08:54:56.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.959 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d752028-5509-4655-aa1d-16b008b002c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.961 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:54:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.961 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'env', 'PROCESS_TAG=haproxy-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/703bdacb-53cd-40a1-9c2c-c632a29e049b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.320 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.321 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3191752, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.321 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Started (Lifecycle Event)#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.329 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.334 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance spawned successfully.#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.335 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.346 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.360 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.360 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.362 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3237886, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.395 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.399 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3268743, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.399 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.419 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.423 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.428 253542 INFO nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 20.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.429 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:54:57 np0005534516 podman[376559]: 2025-11-25 08:54:57.373299106 +0000 UTC m=+0.039702951 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:54:57 np0005534516 podman[376559]: 2025-11-25 08:54:57.485184472 +0000 UTC m=+0.151588217 container create f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.485 253542 INFO nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 22.05 seconds to build instance.#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.498 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:54:57 np0005534516 systemd[1]: Started libpod-conmon-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope.
Nov 25 03:54:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:54:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a140f52146212a9c7c93bc8139169246af3392159627f1d84ff95a83d956b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:54:57 np0005534516 podman[376559]: 2025-11-25 08:54:57.613551906 +0000 UTC m=+0.279955651 container init f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:54:57 np0005534516 podman[376559]: 2025-11-25 08:54:57.621853172 +0000 UTC m=+0.288256917 container start f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:54:57 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : New worker (376580) forked
Nov 25 03:54:57 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : Loading success.
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 21 KiB/s wr, 2 op/s
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.975 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.977 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.978 253542 INFO nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Terminating instance#033[00m
Nov 25 03:54:57 np0005534516 nova_compute[253538]: 2025-11-25 08:54:57.979 253542 DEBUG nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:54:58 np0005534516 kernel: tap0797e76b-3f (unregistering): left promiscuous mode
Nov 25 03:54:58 np0005534516 NetworkManager[48915]: <info>  [1764060898.0824] device (tap0797e76b-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01178|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=0)
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01179|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down in Southbound
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01180|binding|INFO|Removing iface tap0797e76b-3f ovn-installed in OVS
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.111 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.112 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.115 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8:0:1:f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.117 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.145 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5287f052-8700-47a7-9ca4-f9413eb2b009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 25 03:54:58 np0005534516 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 16.816s CPU time.
Nov 25 03:54:58 np0005534516 systemd-machined[215790]: Machine qemu-145-instance-00000074 terminated.
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.186 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d226cdf-5763-47c5-9115-198940fa2dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.191 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[043db0e2-7ff9-41e5-aea1-105c5089bf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 kernel: tap0797e76b-3f: entered promiscuous mode
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01181|binding|INFO|Claiming lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 for this chassis.
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01182|binding|INFO|0797e76b-3f15-4c7e-ae0d-0f4813d59967: Claiming fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 03:54:58 np0005534516 NetworkManager[48915]: <info>  [1764060898.2031] manager: (tap0797e76b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 systemd-udevd[376462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.211 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:58 np0005534516 kernel: tap0797e76b-3f (unregistering): left promiscuous mode
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01183|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 ovn-installed in OVS
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01184|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 up in Southbound
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01185|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=1)
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01186|binding|INFO|Removing iface tap0797e76b-3f ovn-installed in OVS
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01187|if_status|INFO|Dropped 1 log messages in last 320 seconds (most recently, 320 seconds ago) due to excessive rate
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01188|if_status|INFO|Not setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down as sb is readonly
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01189|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=0)
Nov 25 03:54:58 np0005534516 ovn_controller[152859]: 2025-11-25T08:54:58Z|01190|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down in Southbound
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.245 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.243 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[683a06c5-4978-4603-b65b-a2c407854e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.246 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance destroyed successfully.#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.247 253542 DEBUG nova.objects.instance [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.257 253542 DEBUG nova.virt.libvirt.vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.258 253542 DEBUG nova.network.os_vif_util [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.258 253542 DEBUG nova.network.os_vif_util [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.259 253542 DEBUG os_vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.260 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.260 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0797e76b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.269 253542 INFO os_vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f')#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17f6c713-4b79-4d5f-a597-348d1b340418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f631bed-42aa-4d7e-896e-44627edab412]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376618, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376618, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.300 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.303 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.303 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.306 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 unbound from our chassis#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.307 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bffe5059-c1f6-4934-a3bf-85dea9e560f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.311 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.326 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[738e0517-5014-41cd-b09e-bed374a90e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.361 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[02a5fcd7-884e-415a-a84f-3c6dd83527c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.365 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6444d208-28e3-4c11-a9e0-77c7abe110d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.400 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a037fd-922c-4588-b9bc-fe7a9594a7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.418 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[159eec9e-216e-4781-ad64-6d8ccfe8d0c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376628, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40ec250-36bc-4a4e-a0df-6909cb7ade47]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376629, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376629, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.434 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.439 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.439 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.441 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.443 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.463 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4286d7c-4527-460e-88be-edb0c606a2f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.495 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6577bed7-d4c0-42e1-9d7a-40fdbf8b46d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.498 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[321754e9-12f9-4ee9-8fb3-4c040dc37dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a433dd5f-32b4-4c90-ae81-e337f44de5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb4eab5-8f55-4fb7-a68a-f7d183331729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376635, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e9b9a7-7d49-47f4-b392-52e72a83e25f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376636, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376636, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.586 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.586 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:54:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.895 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.899 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.899 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.902 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.902 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.903 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:54:58 np0005534516 nova_compute[253538]: 2025-11-25 08:54:58.903 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:54:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 259 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 23 KiB/s wr, 54 op/s
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.241 253542 INFO nova.virt.libvirt.driver [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deleting instance files /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_del#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.243 253542 INFO nova.virt.libvirt.driver [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deletion of /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_del complete#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.303 253542 INFO nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 2.32 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.304 253542 DEBUG oslo.service.loopingcall [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.305 253542 DEBUG nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.305 253542 DEBUG nova.network.neutron [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.992 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.993 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.993 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 WARNING nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:00 np0005534516 nova_compute[253538]: 2025-11-25 08:55:00.998 253542 WARNING nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.129 253542 DEBUG nova.network.neutron [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.145 253542 INFO nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 0.84 seconds to deallocate network for instance.#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.204 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.205 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.264 253542 DEBUG nova.compute.manager [req-2bca0e44-50b1-4c27-8d42-adf29616893b req-c826736a-d3b6-4868-a196-2c439a06afee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-deleted-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.302 253542 DEBUG oslo_concurrency.processutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347868721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.779 253542 DEBUG oslo_concurrency.processutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.786 253542 DEBUG nova.compute.provider_tree [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.803 253542 DEBUG nova.scheduler.client.report [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.821 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 235 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 KiB/s wr, 91 op/s
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.853 253542 INFO nova.scheduler.client.report [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12#033[00m
Nov 25 03:55:01 np0005534516 nova_compute[253538]: 2025-11-25 08:55:01.908 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:02 np0005534516 nova_compute[253538]: 2025-11-25 08:55:02.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.170 253542 DEBUG nova.compute.manager [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG nova.compute.manager [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.172 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.368 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.369 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.384 253542 DEBUG nova.objects.instance [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.404 253542 DEBUG nova.virt.libvirt.vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.405 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.406 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.410 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.413 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.416 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Attempting to detach device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.416 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <target dev="tap1959aca7-b2"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.425 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.430 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface>not found in domain: <domain type='kvm' id='142'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <name>instance-00000072</name>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='serial'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='uuid'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk' index='2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config' index='1'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:36:b2:21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='tap8f1fcc3c-5f'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:d6:c5:52'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='tap1959aca7-b2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='net1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c344,c569</label>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c344,c569</imagelabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.435 253542 INFO nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the persistent domain config.#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.436 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] (1/8): Attempting to detach device tap1959aca7-b2 with device alias net1 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.436 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <model type="virtio"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <mtu size="1442"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <target dev="tap1959aca7-b2"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </interface>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 03:55:03 np0005534516 kernel: tap1959aca7-b2 (unregistering): left promiscuous mode
Nov 25 03:55:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:03Z|01191|binding|INFO|Releasing lport 1959aca7-b25c-4fe5-b59a-70db352af78b from this chassis (sb_readonly=0)
Nov 25 03:55:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:03Z|01192|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b down in Southbound
Nov 25 03:55:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:03Z|01193|binding|INFO|Removing iface tap1959aca7-b2 ovn-installed in OVS
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 NetworkManager[48915]: <info>  [1764060903.5538] device (tap1959aca7-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.557 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764060903.5567229, 76611b0b-db06-4903-a22a-59b23a1e0d48 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c5:52 10.100.0.26', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1959aca7-b25c-4fe5-b59a-70db352af78b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1959aca7-b25c-4fe5-b59a-70db352af78b in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.562 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.563 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Start waiting for the detach event from libvirt for device tap1959aca7-b2 with device alias net1 for instance 76611b0b-db06-4903-a22a-59b23a1e0d48 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.564 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.563 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f4463a-d487-4a9a-94b3-541719baf2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.564 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 namespace which is not needed anymore#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.569 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface>not found in domain: <domain type='kvm' id='142'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <name>instance-00000072</name>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <memory unit='KiB'>131072</memory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <vcpu placement='static'>1</vcpu>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <resource>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <partition>/machine</partition>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </resource>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <sysinfo type='smbios'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='manufacturer'>RDO</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='serial'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='uuid'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <entry name='family'>Virtual Machine</entry>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <boot dev='hd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <smbios mode='sysinfo'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <vmcoreinfo state='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <vendor>AMD</vendor>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='x2apic'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='hypervisor'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='stibp'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='overflow-recov'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='succor'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='ibrs'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='lbrv'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pause-filter'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='xsaves'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='svm'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='require' name='topoext'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='npt'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <feature policy='disable' name='nrip-save'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <clock offset='utc'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <timer name='hpet' present='no'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_poweroff>destroy</on_poweroff>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_reboot>restart</on_reboot>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <on_crash>destroy</on_crash>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <disk type='network' device='disk'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk' index='2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='vda' bus='virtio'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='virtio-disk0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <disk type='network' device='cdrom'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <auth username='openstack'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config' index='1'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <host name='192.168.122.100' port='6789'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='sda' bus='sata'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <readonly/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='sata0-0-0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pcie.0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='1' port='0x10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='2' port='0x11'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='3' port='0x12'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='4' port='0x13'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='5' port='0x14'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='6' port='0x15'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='7' port='0x16'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='8' port='0x17'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.8'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='9' port='0x18'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.9'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='10' port='0x19'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='11' port='0x1a'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.11'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='12' port='0x1b'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.12'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='13' port='0x1c'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.13'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='14' port='0x1d'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.14'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='15' port='0x1e'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.15'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='16' port='0x1f'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.16'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='17' port='0x20'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.17'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='18' port='0x21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.18'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='19' port='0x22'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.19'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='20' port='0x23'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.20'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='21' port='0x24'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='22' port='0x25'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.22'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='23' port='0x26'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.23'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='24' port='0x27'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.24'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-root-port'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target chassis='25' port='0x28'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.25'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model name='pcie-pci-bridge'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='pci.26'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='usb'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <controller type='sata' index='0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='ide'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </controller>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <interface type='ethernet'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mac address='fa:16:3e:36:b2:21'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target dev='tap8f1fcc3c-5f'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model type='virtio'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <mtu size='1442'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='net0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <serial type='pty'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target type='isa-serial' port='0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:        <model name='isa-serial'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      </target>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <source path='/dev/pts/0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <target type='serial' port='0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='serial0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </console>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='tablet' bus='usb'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='usb' bus='0' port='1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='mouse' bus='ps2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input1'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <input type='keyboard' bus='ps2'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='input2'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </input>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <listen type='address' address='::0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </graphics>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <audio id='1' type='none'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='video0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <watchdog model='itco' action='reset'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='watchdog0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </watchdog>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <memballoon model='virtio'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <stats period='10'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='balloon0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <rng model='virtio'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <backend model='random'>/dev/urandom</backend>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <alias name='rng0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <label>system_u:system_r:svirt_t:s0:c344,c569</label>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c344,c569</imagelabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <label>+107:+107</label>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <imagelabel>+107:+107</imagelabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </seclabel>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.569 253542 INFO nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the live domain config.#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.570 253542 DEBUG nova.virt.libvirt.vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.570 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.571 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.571 253542 DEBUG os_vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.576 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1959aca7-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.578 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.584 253542 INFO os_vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2')#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.585 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:creationTime>2025-11-25 08:55:03</nova:creationTime>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:flavor name="m1.nano">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:memory>128</nova:memory>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:disk>1</nova:disk>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:swap>0</nova:swap>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:vcpus>1</nova:vcpus>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:flavor>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:owner>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  <nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 03:55:03 np0005534516 nova_compute[253538]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:    </nova:port>
Nov 25 03:55:03 np0005534516 nova_compute[253538]:  </nova:ports>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: </nova:instance>
Nov 25 03:55:03 np0005534516 nova_compute[253538]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : haproxy version is 2.8.14-c23fe91
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : path to executable is /usr/sbin/haproxy
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : Exiting Master process...
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : Exiting Master process...
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [ALERT]    (374137) : Current worker (374139) exited with code 143 (Terminated)
Nov 25 03:55:03 np0005534516 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : All workers exited. Exiting... (0)
Nov 25 03:55:03 np0005534516 systemd[1]: libpod-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope: Deactivated successfully.
Nov 25 03:55:03 np0005534516 podman[376686]: 2025-11-25 08:55:03.745949301 +0000 UTC m=+0.060156999 container died a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:55:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c-userdata-shm.mount: Deactivated successfully.
Nov 25 03:55:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e9ca46848af656bb6a002ab5710a7e5831c6c96bc3a4e29b10c26e75820c7aca-merged.mount: Deactivated successfully.
Nov 25 03:55:03 np0005534516 podman[376686]: 2025-11-25 08:55:03.795512389 +0000 UTC m=+0.109720087 container cleanup a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:55:03 np0005534516 systemd[1]: libpod-conmon-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope: Deactivated successfully.
Nov 25 03:55:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 214 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 03:55:03 np0005534516 podman[376718]: 2025-11-25 08:55:03.866396029 +0000 UTC m=+0.044464521 container remove a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad8d968-13a0-4b6d-aefa-471e6502a14a]: (4, ('Tue Nov 25 08:55:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 (a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c)\na13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c\nTue Nov 25 08:55:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 (a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c)\na13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dfded48b-308a-4108-896e-58e52b65b8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 kernel: tap8f0f7d83-b0: left promiscuous mode
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cde4d765-e1f0-4473-86bd-7dffca54ef2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 nova_compute[253538]: 2025-11-25 08:55:03.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f686c2a-1f69-4ad4-bdf5-3bc1079bf853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f28313f-767d-4ff1-9c7b-0c089fea7d88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.918 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61eebccf-e9c4-4e58-939f-d44ed9028397]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616513, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376731, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:03 np0005534516 systemd[1]: run-netns-ovnmeta\x2d8f0f7d83\x2db45f\x2d4a49\x2d9b0c\x2d4eced5b56b37.mount: Deactivated successfully.
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.921 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:55:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.921 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da457046-5d57-42e9-a86b-9a3f4c28ae63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011146179266783877 of space, bias 1.0, pg target 0.3343853780035163 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:55:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.207 253542 DEBUG nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.207 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 WARNING nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:04 np0005534516 nova_compute[253538]: 2025-11-25 08:55:04.464 253542 DEBUG nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:55:05 np0005534516 nova_compute[253538]: 2025-11-25 08:55:05.122 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:55:05 np0005534516 nova_compute[253538]: 2025-11-25 08:55:05.123 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:05 np0005534516 nova_compute[253538]: 2025-11-25 08:55:05.142 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.132 253542 INFO nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Port 1959aca7-b25c-4fe5-b59a-70db352af78b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.132 253542 DEBUG nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.151 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.171 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:06Z|01194|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 03:55:06 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:06Z|01195|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.350 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 WARNING nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.352 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-deleted-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:06 np0005534516 nova_compute[253538]: 2025-11-25 08:55:06.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.278 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.279 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.280 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.281 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.282 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.284 253542 INFO nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Terminating instance#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.286 253542 DEBUG nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:55:07 np0005534516 kernel: tap8f1fcc3c-5f (unregistering): left promiscuous mode
Nov 25 03:55:07 np0005534516 NetworkManager[48915]: <info>  [1764060907.3454] device (tap8f1fcc3c-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:55:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:07Z|01196|binding|INFO|Releasing lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb from this chassis (sb_readonly=0)
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:07Z|01197|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb down in Southbound
Nov 25 03:55:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:07Z|01198|binding|INFO|Removing iface tap8f1fcc3c-5f ovn-installed in OVS
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.388 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:b2:21 10.100.0.3'], port_security=['fa:16:3e:36:b2:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70c2e597-b59d-412f-a7ad-333ba7cbd35e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b027220-e81c-4ac9-90ba-6c25793cc1d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.390 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb in datapath 60f2641c-f03e-4ef3-a462-4bd54e93c59c unbound from our chassis#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.392 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60f2641c-f03e-4ef3-a462-4bd54e93c59c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7243d39f-b1ed-4338-906d-962ba0625156]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.395 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c namespace which is not needed anymore#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 25 03:55:07 np0005534516 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Consumed 20.250s CPU time.
Nov 25 03:55:07 np0005534516 systemd-machined[215790]: Machine qemu-142-instance-00000072 terminated.
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.529 253542 INFO nova.virt.libvirt.driver [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance destroyed successfully.#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.530 253542 DEBUG nova.objects.instance [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.543 253542 DEBUG nova.virt.libvirt.vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.543 253542 DEBUG nova.network.os_vif_util [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.544 253542 DEBUG nova.network.os_vif_util [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.545 253542 DEBUG os_vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.547 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f1fcc3c-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.556 253542 INFO os_vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f')#033[00m
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : haproxy version is 2.8.14-c23fe91
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : path to executable is /usr/sbin/haproxy
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : Exiting Master process...
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : Exiting Master process...
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [ALERT]    (372609) : Current worker (372623) exited with code 143 (Terminated)
Nov 25 03:55:07 np0005534516 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : All workers exited. Exiting... (0)
Nov 25 03:55:07 np0005534516 systemd[1]: libpod-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope: Deactivated successfully.
Nov 25 03:55:07 np0005534516 podman[376761]: 2025-11-25 08:55:07.602639044 +0000 UTC m=+0.052390706 container died f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:55:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e-userdata-shm.mount: Deactivated successfully.
Nov 25 03:55:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3552cae6011fac03a9da2375e711d654941020090cecb0f909b8a0ba241d8ae3-merged.mount: Deactivated successfully.
Nov 25 03:55:07 np0005534516 podman[376761]: 2025-11-25 08:55:07.648450651 +0000 UTC m=+0.098202333 container cleanup f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:55:07 np0005534516 systemd[1]: libpod-conmon-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope: Deactivated successfully.
Nov 25 03:55:07 np0005534516 podman[376814]: 2025-11-25 08:55:07.725409676 +0000 UTC m=+0.051255746 container remove f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0116d301-58c6-4d02-b548-b7c7a57fa1ed]: (4, ('Tue Nov 25 08:55:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c (f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e)\nf93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e\nTue Nov 25 08:55:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c (f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e)\nf93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d212334-7e18-409d-9962-595f2b45ba69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.735 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f2641c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:07 np0005534516 kernel: tap60f2641c-f0: left promiscuous mode
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.756 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[814ef78b-9bb9-4e4c-a762-97c8a3aef2b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:55:07 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d43b191-93a5-412b-86ec-e768a7a337d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5fbb3e-6999-4e25-ad55-256c2189ab6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f668401c-6053-4229-8291-dba7febef235]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613548, 'reachable_time': 19392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376827, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 systemd[1]: run-netns-ovnmeta\x2d60f2641c\x2df03e\x2d4ef3\x2da462\x2d4bd54e93c59c.mount: Deactivated successfully.
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.801 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:55:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.802 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb64680-0302-4019-aba0-7334f43ef75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.966 253542 INFO nova.virt.libvirt.driver [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deleting instance files /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48_del#033[00m
Nov 25 03:55:07 np0005534516 nova_compute[253538]: 2025-11-25 08:55:07.967 253542 INFO nova.virt.libvirt.driver [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deletion of /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48_del complete#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.026 253542 INFO nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.028 253542 DEBUG oslo.service.loopingcall [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.028 253542 DEBUG nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.029 253542 DEBUG nova.network.neutron [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.477 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:08 np0005534516 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:55:09 np0005534516 nova_compute[253538]: 2025-11-25 08:55:09.807 253542 DEBUG nova.network.neutron [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:09 np0005534516 nova_compute[253538]: 2025-11-25 08:55:09.824 253542 INFO nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 1.80 seconds to deallocate network for instance.#033[00m
Nov 25 03:55:09 np0005534516 podman[376830]: 2025-11-25 08:55:09.841187965 +0000 UTC m=+0.082810546 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:55:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 164 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 115 op/s
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.025 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.026 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.097 253542 DEBUG oslo_concurrency.processutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.547 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.547 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 WARNING nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.549 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-deleted-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455161361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.593 253542 DEBUG oslo_concurrency.processutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.598 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.598 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.605 253542 DEBUG nova.compute.provider_tree [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.622 253542 DEBUG nova.scheduler.client.report [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.626 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.655 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.694 253542 INFO nova.scheduler.client.report [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 76611b0b-db06-4903-a22a-59b23a1e0d48#033[00m
Nov 25 03:55:10 np0005534516 nova_compute[253538]: 2025-11-25 08:55:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:11Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:55:11 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:11Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:55:11 np0005534516 podman[376871]: 2025-11-25 08:55:11.810903408 +0000 UTC m=+0.065338810 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 03:55:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 150 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 116 op/s
Nov 25 03:55:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:12 np0005534516 nova_compute[253538]: 2025-11-25 08:55:12.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:12 np0005534516 nova_compute[253538]: 2025-11-25 08:55:12.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:13 np0005534516 nova_compute[253538]: 2025-11-25 08:55:13.243 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060898.2377136, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:13 np0005534516 nova_compute[253538]: 2025-11-25 08:55:13.244 253542 INFO nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:55:13 np0005534516 nova_compute[253538]: 2025-11-25 08:55:13.258 253542 DEBUG nova.compute.manager [None req-af17d690-bcec-4aa2-87d9-0da6f0b4c3d5 - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 03:55:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:15Z|01199|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 03:55:15 np0005534516 nova_compute[253538]: 2025-11-25 08:55:15.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 03:55:16 np0005534516 podman[376892]: 2025-11-25 08:55:16.904821349 +0000 UTC m=+0.154829375 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller)
Nov 25 03:55:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:17 np0005534516 nova_compute[253538]: 2025-11-25 08:55:17.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:17 np0005534516 nova_compute[253538]: 2025-11-25 08:55:17.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 03:55:18 np0005534516 nova_compute[253538]: 2025-11-25 08:55:18.824 253542 INFO nova.compute.manager [None req-c794c82d-9f08-481b-9dcc-1b462cdfb009 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output#033[00m
Nov 25 03:55:18 np0005534516 nova_compute[253538]: 2025-11-25 08:55:18.832 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.326 253542 DEBUG nova.objects.instance [None req-a6afed9d-ecd5-4636-9a7b-65cbdcefd586 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.350 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060919.3498385, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.375 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.397 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 03:55:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 03:55:19 np0005534516 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 03:55:19 np0005534516 NetworkManager[48915]: <info>  [1764060919.9246] device (tap52157627-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:55:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:19Z|01200|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 03:55:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:19Z|01201|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 03:55:19 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:19Z|01202|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:19 np0005534516 nova_compute[253538]: 2025-11-25 08:55:19.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.969 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.971 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.973 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19748c90-9ee6-4034-8761-712e84260129]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.975 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace which is not needed anymore#033[00m
Nov 25 03:55:19 np0005534516 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 03:55:19 np0005534516 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Consumed 14.238s CPU time.
Nov 25 03:55:19 np0005534516 systemd-machined[215790]: Machine qemu-146-instance-00000075 terminated.
Nov 25 03:55:20 np0005534516 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 03:55:20 np0005534516 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01203|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01204|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.115 253542 DEBUG nova.compute.manager [None req-a6afed9d-ecd5-4636-9a7b-65cbdcefd586 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01205|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01206|if_status|INFO|Dropped 1 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01207|if_status|INFO|Not setting lport 52157627-d75e-4670-9215-6471bda94ba6 down as sb is readonly
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:20 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:20Z|01208|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:20 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : haproxy version is 2.8.14-c23fe91
Nov 25 03:55:20 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : path to executable is /usr/sbin/haproxy
Nov 25 03:55:20 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [WARNING]  (376578) : Exiting Master process...
Nov 25 03:55:20 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [ALERT]    (376578) : Current worker (376580) exited with code 143 (Terminated)
Nov 25 03:55:20 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [WARNING]  (376578) : All workers exited. Exiting... (0)
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.144 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:20 np0005534516 systemd[1]: libpod-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope: Deactivated successfully.
Nov 25 03:55:20 np0005534516 podman[376947]: 2025-11-25 08:55:20.153655798 +0000 UTC m=+0.055008369 container died f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:55:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3-userdata-shm.mount: Deactivated successfully.
Nov 25 03:55:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-099a140f52146212a9c7c93bc8139169246af3392159627f1d84ff95a83d956b-merged.mount: Deactivated successfully.
Nov 25 03:55:20 np0005534516 podman[376947]: 2025-11-25 08:55:20.193854242 +0000 UTC m=+0.095206813 container cleanup f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:55:20 np0005534516 systemd[1]: libpod-conmon-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope: Deactivated successfully.
Nov 25 03:55:20 np0005534516 podman[376981]: 2025-11-25 08:55:20.266722095 +0000 UTC m=+0.046791515 container remove f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.275 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24837c32-c345-49ec-8214-e5bd11ea6690]: (4, ('Tue Nov 25 08:55:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3)\nf8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3\nTue Nov 25 08:55:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3)\nf8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76d2b0c0-f995-4ac9-a546-e962e4e384be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.279 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:20 np0005534516 kernel: tap703bdacb-50: left promiscuous mode
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.313 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78c68840-4bdf-4223-9ec4-796c2224f039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.332 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5c19b8-2f0f-4074-bbed-0f37519cc295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[554bc9c5-a10f-4bba-a5af-6bf35dccf0a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.349 253542 DEBUG nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.350 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.351 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.352 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.352 253542 DEBUG nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:20 np0005534516 nova_compute[253538]: 2025-11-25 08:55:20.353 253542 WARNING nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e495895-8fef-40bc-9138-1702a6b6b471]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625121, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376999, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.358 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.359 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c6387ab4-a1f4-4e51-bef7-94ac72580db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.360 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:20 np0005534516 systemd[1]: run-netns-ovnmeta\x2d703bdacb\x2d53cd\x2d40a1\x2d9c2c\x2dc632a29e049b.mount: Deactivated successfully.
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.362 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.363 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2969eac1-fffd-45d2-b6e8-e1bcc18ab1d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.364 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.365 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.366 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96c8a696-d2b5-40ae-b117-2c324165ed88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:21 np0005534516 nova_compute[253538]: 2025-11-25 08:55:21.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.527 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060907.5259435, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.528 253542 INFO nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:55:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.545 253542 DEBUG nova.compute.manager [None req-59a63d91-652c-4975-8ecf-b9d76e39a190 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.791 253542 DEBUG nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.791 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.792 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.792 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.793 253542 DEBUG nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.793 253542 WARNING nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 03:55:22 np0005534516 nova_compute[253538]: 2025-11-25 08:55:22.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 888 KiB/s wr, 24 op/s
Nov 25 03:55:24 np0005534516 nova_compute[253538]: 2025-11-25 08:55:24.912 253542 INFO nova.compute.manager [None req-5e861d71-549c-4e19-b35d-83826b042cfe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output#033[00m
Nov 25 03:55:25 np0005534516 nova_compute[253538]: 2025-11-25 08:55:25.140 253542 INFO nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Resuming#033[00m
Nov 25 03:55:25 np0005534516 nova_compute[253538]: 2025-11-25 08:55:25.141 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:25 np0005534516 nova_compute[253538]: 2025-11-25 08:55:25.177 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:25 np0005534516 nova_compute[253538]: 2025-11-25 08:55:25.178 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:25 np0005534516 nova_compute[253538]: 2025-11-25 08:55:25.178 253542 DEBUG nova.network.neutron [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:55:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 466 KiB/s wr, 12 op/s
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.709 253542 DEBUG nova.network.neutron [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.734 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.741 253542 DEBUG nova.virt.libvirt.vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:54:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:20Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.742 253542 DEBUG nova.network.os_vif_util [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.743 253542 DEBUG nova.network.os_vif_util [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.743 253542 DEBUG os_vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.744 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.745 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52157627-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52157627-d7, col_values=(('external_ids', {'iface-id': '52157627-d75e-4670-9215-6471bda94ba6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:b5:38', 'vm-uuid': '7e82fa8c-6663-439c-833c-2b28f22282a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.751 253542 INFO os_vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.783 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:26 np0005534516 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 03:55:26 np0005534516 NetworkManager[48915]: <info>  [1764060926.8717] manager: (tap52157627-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:26Z|01209|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 03:55:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:26Z|01210|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:55:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:26Z|01211|binding|INFO|Removing lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.884 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b bound to our chassis#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.887 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 703bdacb-53cd-40a1-9c2c-c632a29e049b#033[00m
Nov 25 03:55:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:26Z|01212|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 03:55:26 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:26Z|01213|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 nova_compute[253538]: 2025-11-25 08:55:26.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[352860f1-669e-4f2e-b688-0ff4aa65f5ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.902 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap703bdacb-51 in ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.904 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap703bdacb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db9948f6-4835-4d2c-bf92-59f85b8c9e9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bff56f9-8598-454c-8a87-e967c035fff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 systemd-udevd[377116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.921 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[33be21ad-db7f-4885-99bf-3ce3169b8e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 systemd-machined[215790]: New machine qemu-147-instance-00000075.
Nov 25 03:55:26 np0005534516 NetworkManager[48915]: <info>  [1764060926.9347] device (tap52157627-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:55:26 np0005534516 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Nov 25 03:55:26 np0005534516 NetworkManager[48915]: <info>  [1764060926.9371] device (tap52157627-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.945 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe9c22-3c9b-47d9-bfe7-72dd6ba6366d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.981 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[903d1cf8-365d-443c-8ae0-ee6a9895f393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3f55ac-a1a6-47a3-8b6f-4a26e73e701b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:26 np0005534516 systemd-udevd[377119]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:55:26 np0005534516 NetworkManager[48915]: <info>  [1764060926.9888] manager: (tap703bdacb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/492)
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.024 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[59294259-6a91-48eb-8d34-f07a12d2e268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.028 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[80ebf1f4-bae8-4194-8b05-60bd5997b157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 NetworkManager[48915]: <info>  [1764060927.0549] device (tap703bdacb-50): carrier: link connected
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.063 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aa565c-057e-46f4-9f81-5d903152c5b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6b6cb0-0118-4fce-b131-c1893a000c27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628165, 'reachable_time': 24356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377169, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18cee531-c5fd-44e5-8f3c-ddc69c2291e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:81bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628165, 'tstamp': 628165}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377172, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.116 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19a62ef1-56b1-4012-ba93-4fb1d2427c3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628165, 'reachable_time': 24356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377186, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44543f23-aa1e-4819-812c-f8e53e2f0ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2c418b3b-f475-485b-b7c0-3341866efa13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap703bdacb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:27 np0005534516 NetworkManager[48915]: <info>  [1764060927.2306] manager: (tap703bdacb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 kernel: tap703bdacb-50: entered promiscuous mode
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap703bdacb-50, col_values=(('external_ids', {'iface-id': '583866cf-82da-4259-9189-db9f58620872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:27Z|01214|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.238 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8403c0be-9b6f-4976-9767-2c060056b2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.240 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:55:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.240 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'env', 'PROCESS_TAG=haproxy-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/703bdacb-53cd-40a1-9c2c-c632a29e049b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 podman[377266]: 2025-11-25 08:55:27.379144115 +0000 UTC m=+0.075015923 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.425 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7e82fa8c-6663-439c-833c-2b28f22282a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.425 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060927.424899, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.426 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Started (Lifecycle Event)#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.450 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.463 253542 DEBUG nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.464 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.470 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.488 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance running successfully.#033[00m
Nov 25 03:55:27 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.492 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.493 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060927.4329023, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.493 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.497 253542 DEBUG nova.virt.libvirt.guest [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.498 253542 DEBUG nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:27 np0005534516 podman[377266]: 2025-11-25 08:55:27.515058235 +0000 UTC m=+0.210930043 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.524 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:55:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.546 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 podman[377320]: 2025-11-25 08:55:27.640528869 +0000 UTC m=+0.054372621 container create 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:55:27 np0005534516 systemd[1]: Started libpod-conmon-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope.
Nov 25 03:55:27 np0005534516 podman[377320]: 2025-11-25 08:55:27.612954049 +0000 UTC m=+0.026797801 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802d7b725468ff6e6315af7a30e30a1e715910a6a92e1c856150d350c2e2cd52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:27 np0005534516 podman[377320]: 2025-11-25 08:55:27.746026791 +0000 UTC m=+0.159870523 container init 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:55:27 np0005534516 podman[377320]: 2025-11-25 08:55:27.752172988 +0000 UTC m=+0.166016720 container start 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 03:55:27 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : New worker (377372) forked
Nov 25 03:55:27 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : Loading success.
Nov 25 03:55:27 np0005534516 nova_compute[253538]: 2025-11-25 08:55:27.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:28 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 746380e6-f276-4262-beb1-d716bc2b22b0 does not exist
Nov 25 03:55:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f0ef677-faf7-4ba5-8a32-688d709cbc22 does not exist
Nov 25 03:55:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 72df9519-4e17-4e03-9142-06ddb3f35e8b does not exist
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 03:55:29 np0005534516 nova_compute[253538]: 2025-11-25 08:55:29.894 253542 INFO nova.compute.manager [None req-9b8323bd-9b4a-4093-b1af-72c9e6a8836c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output#033[00m
Nov 25 03:55:29 np0005534516 nova_compute[253538]: 2025-11-25 08:55:29.899 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:29 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:55:29 np0005534516 podman[377737]: 2025-11-25 08:55:29.974081085 +0000 UTC m=+0.044249615 container create e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 03:55:30 np0005534516 systemd[1]: Started libpod-conmon-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope.
Nov 25 03:55:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:29.95218128 +0000 UTC m=+0.022349820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:30.056856909 +0000 UTC m=+0.127025459 container init e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:30.064529978 +0000 UTC m=+0.134698548 container start e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 03:55:30 np0005534516 epic_franklin[377753]: 167 167
Nov 25 03:55:30 np0005534516 systemd[1]: libpod-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope: Deactivated successfully.
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:30.097782713 +0000 UTC m=+0.167951273 container attach e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:30.100260219 +0000 UTC m=+0.170428759 container died e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:55:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9c6d1441326e258abdca401ea51af6d9ba8cb7410f9dec55b8a4a69dc63fb00a-merged.mount: Deactivated successfully.
Nov 25 03:55:30 np0005534516 podman[377737]: 2025-11-25 08:55:30.170839401 +0000 UTC m=+0.241007931 container remove e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 03:55:30 np0005534516 systemd[1]: libpod-conmon-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope: Deactivated successfully.
Nov 25 03:55:30 np0005534516 podman[377779]: 2025-11-25 08:55:30.361323336 +0000 UTC m=+0.040254897 container create 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:55:30 np0005534516 systemd[1]: Started libpod-conmon-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope.
Nov 25 03:55:30 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:30 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:30 np0005534516 podman[377779]: 2025-11-25 08:55:30.343588323 +0000 UTC m=+0.022519904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:30 np0005534516 podman[377779]: 2025-11-25 08:55:30.448214711 +0000 UTC m=+0.127146302 container init 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 03:55:30 np0005534516 podman[377779]: 2025-11-25 08:55:30.461486882 +0000 UTC m=+0.140418433 container start 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 03:55:30 np0005534516 podman[377779]: 2025-11-25 08:55:30.464351129 +0000 UTC m=+0.143282690 container attach 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.823 253542 DEBUG nova.compute.manager [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG nova.compute.manager [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.825 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.922 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.924 253542 INFO nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Terminating instance#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.925 253542 DEBUG nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:55:30 np0005534516 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 03:55:30 np0005534516 NetworkManager[48915]: <info>  [1764060930.9759] device (tap52157627-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:55:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:30Z|01215|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:30Z|01216|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 03:55:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:30Z|01217|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:30 np0005534516 nova_compute[253538]: 2025-11-25 08:55:30.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:30.999 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.002 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.003 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be6c3fcf-86fc-4004-9013-25cbda53c175]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.004 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace which is not needed anymore#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 03:55:31 np0005534516 systemd-machined[215790]: Machine qemu-147-instance-00000075 terminated.
Nov 25 03:55:31 np0005534516 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 03:55:31 np0005534516 systemd-udevd[377805]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01218|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 03:55:31 np0005534516 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01219|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.206 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : haproxy version is 2.8.14-c23fe91
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : path to executable is /usr/sbin/haproxy
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : Exiting Master process...
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : Exiting Master process...
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01220|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01221|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.221 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [ALERT]    (377366) : Current worker (377372) exited with code 143 (Terminated)
Nov 25 03:55:31 np0005534516 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : All workers exited. Exiting... (0)
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01222|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=1)
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01223|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: libpod-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope: Deactivated successfully.
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01224|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 03:55:31 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:31Z|01225|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.234 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance destroyed successfully.#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.235 253542 DEBUG nova.objects.instance [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.238 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.240 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 podman[377824]: 2025-11-25 08:55:31.240791205 +0000 UTC m=+0.120436710 container died 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.249 253542 DEBUG nova.virt.libvirt.vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:54:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:27Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.249 253542 DEBUG nova.network.os_vif_util [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.250 253542 DEBUG nova.network.os_vif_util [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.251 253542 DEBUG os_vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.253 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52157627-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.260 253542 INFO os_vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296-userdata-shm.mount: Deactivated successfully.
Nov 25 03:55:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-802d7b725468ff6e6315af7a30e30a1e715910a6a92e1c856150d350c2e2cd52-merged.mount: Deactivated successfully.
Nov 25 03:55:31 np0005534516 podman[377824]: 2025-11-25 08:55:31.313798661 +0000 UTC m=+0.193444146 container cleanup 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:55:31 np0005534516 systemd[1]: libpod-conmon-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope: Deactivated successfully.
Nov 25 03:55:31 np0005534516 podman[377878]: 2025-11-25 08:55:31.37843805 +0000 UTC m=+0.039065034 container remove 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efb5c976-27d1-4e33-91c1-2739770bf5af]: (4, ('Tue Nov 25 08:55:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296)\n489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296\nTue Nov 25 08:55:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296)\n489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8bc21d-504e-4eb5-9871-55111beb5784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.390 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:31 np0005534516 kernel: tap703bdacb-50: left promiscuous mode
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c369b3f-bfd5-447c-bea0-cce669e0bc45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[523bb802-21c9-4982-a270-759f23657f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b973c891-6e91-4aa3-9f74-0af9e4d4a6d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe47f0f-1fe0-49a5-879d-af32b05dfef4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628157, 'reachable_time': 15191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377897, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: run-netns-ovnmeta\x2d703bdacb\x2d53cd\x2d40a1\x2d9c2c\x2dc632a29e049b.mount: Deactivated successfully.
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.435 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.435 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b5980a43-40cf-48df-aa34-20e4d60c5afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.436 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.438 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32354173-89ba-4cd1-827c-a56384849a6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.440 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.441 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c050cf50-757d-45d0-9feb-6a71a0526805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:31 np0005534516 gracious_kirch[377796]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:55:31 np0005534516 gracious_kirch[377796]: --> relative data size: 1.0
Nov 25 03:55:31 np0005534516 gracious_kirch[377796]: --> All data devices are unavailable
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.613 253542 INFO nova.virt.libvirt.driver [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deleting instance files /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8_del#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.614 253542 INFO nova.virt.libvirt.driver [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deletion of /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8_del complete#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: libpod-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Deactivated successfully.
Nov 25 03:55:31 np0005534516 systemd[1]: libpod-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Consumed 1.084s CPU time.
Nov 25 03:55:31 np0005534516 podman[377779]: 2025-11-25 08:55:31.64517516 +0000 UTC m=+1.324106771 container died 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.668 253542 INFO nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.669 253542 DEBUG oslo.service.loopingcall [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.670 253542 DEBUG nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:55:31 np0005534516 nova_compute[253538]: 2025-11-25 08:55:31.670 253542 DEBUG nova.network.neutron [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:55:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa-merged.mount: Deactivated successfully.
Nov 25 03:55:31 np0005534516 podman[377779]: 2025-11-25 08:55:31.721739394 +0000 UTC m=+1.400670965 container remove 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 03:55:31 np0005534516 systemd[1]: libpod-conmon-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Deactivated successfully.
Nov 25 03:55:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 137 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 5.5 KiB/s wr, 19 op/s
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.292 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.293 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.318 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.456 253542 DEBUG nova.network.neutron [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.473 253542 INFO nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 0.80 seconds to deallocate network for instance.#033[00m
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.523156897 +0000 UTC m=+0.062746048 container create 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.526 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.527 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:32 np0005534516 systemd[1]: Started libpod-conmon-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope.
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.487516868 +0000 UTC m=+0.027106069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.586 253542 DEBUG oslo_concurrency.processutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.647113852 +0000 UTC m=+0.186703023 container init 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.655391466 +0000 UTC m=+0.194980577 container start 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.658943504 +0000 UTC m=+0.198532655 container attach 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.661 253542 DEBUG nova.compute.manager [req-487fc684-66d0-44ad-a5f5-f58783fed6a5 req-aa252e42-b4bc-42ee-b310-c7213fa461cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-deleted-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:32 np0005534516 silly_babbage[378080]: 167 167
Nov 25 03:55:32 np0005534516 systemd[1]: libpod-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope: Deactivated successfully.
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.662728806 +0000 UTC m=+0.202317917 container died 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:55:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cc9e8cd4f76f074d2393934a0da7aa1821b868e576bab5c767031027235dc8f5-merged.mount: Deactivated successfully.
Nov 25 03:55:32 np0005534516 podman[378063]: 2025-11-25 08:55:32.706669113 +0000 UTC m=+0.246258234 container remove 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:55:32 np0005534516 systemd[1]: libpod-conmon-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope: Deactivated successfully.
Nov 25 03:55:32 np0005534516 nova_compute[253538]: 2025-11-25 08:55:32.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:32 np0005534516 podman[378123]: 2025-11-25 08:55:32.890720952 +0000 UTC m=+0.051564724 container create aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:55:32 np0005534516 systemd[1]: Started libpod-conmon-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope.
Nov 25 03:55:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:32 np0005534516 podman[378123]: 2025-11-25 08:55:32.958048335 +0000 UTC m=+0.118892107 container init aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:55:32 np0005534516 podman[378123]: 2025-11-25 08:55:32.865244898 +0000 UTC m=+0.026088700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:32 np0005534516 podman[378123]: 2025-11-25 08:55:32.970784391 +0000 UTC m=+0.131628163 container start aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:55:32 np0005534516 podman[378123]: 2025-11-25 08:55:32.973896506 +0000 UTC m=+0.134740278 container attach aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:55:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305180320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.070 253542 DEBUG oslo_concurrency.processutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.079 253542 DEBUG nova.compute.provider_tree [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.095 253542 DEBUG nova.scheduler.client.report [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.149 253542 INFO nova.scheduler.client.report [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 7e82fa8c-6663-439c-833c-2b28f22282a8
Nov 25 03:55:33 np0005534516 nova_compute[253538]: 2025-11-25 08:55:33.219 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]: {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    "0": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "devices": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "/dev/loop3"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            ],
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_name": "ceph_lv0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_size": "21470642176",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "name": "ceph_lv0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "tags": {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_name": "ceph",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.crush_device_class": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.encrypted": "0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_id": "0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.vdo": "0"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            },
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "vg_name": "ceph_vg0"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        }
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    ],
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    "1": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "devices": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "/dev/loop4"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            ],
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_name": "ceph_lv1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_size": "21470642176",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "name": "ceph_lv1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "tags": {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_name": "ceph",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.crush_device_class": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.encrypted": "0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_id": "1",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.vdo": "0"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            },
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "vg_name": "ceph_vg1"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        }
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    ],
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    "2": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "devices": [
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "/dev/loop5"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            ],
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_name": "ceph_lv2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_size": "21470642176",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "name": "ceph_lv2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "tags": {
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.cluster_name": "ceph",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.crush_device_class": "",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.encrypted": "0",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osd_id": "2",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:                "ceph.vdo": "0"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            },
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "type": "block",
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:            "vg_name": "ceph_vg2"
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:        }
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]:    ]
Nov 25 03:55:33 np0005534516 lucid_bhabha[378140]: }
Nov 25 03:55:33 np0005534516 systemd[1]: libpod-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope: Deactivated successfully.
Nov 25 03:55:33 np0005534516 podman[378123]: 2025-11-25 08:55:33.792388184 +0000 UTC m=+0.953231966 container died aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 03:55:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 115 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 03:55:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6-merged.mount: Deactivated successfully.
Nov 25 03:55:33 np0005534516 podman[378123]: 2025-11-25 08:55:33.932064796 +0000 UTC m=+1.092908568 container remove aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:55:33 np0005534516 systemd[1]: libpod-conmon-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope: Deactivated successfully.
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.612879197 +0000 UTC m=+0.061039783 container create 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 03:55:34 np0005534516 systemd[1]: Started libpod-conmon-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope.
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.586644443 +0000 UTC m=+0.034805069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.727551428 +0000 UTC m=+0.175712074 container init 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.736720887 +0000 UTC m=+0.184881443 container start 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.741531049 +0000 UTC m=+0.189691645 container attach 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:55:34 np0005534516 agitated_tesla[378322]: 167 167
Nov 25 03:55:34 np0005534516 systemd[1]: libpod-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope: Deactivated successfully.
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.745148667 +0000 UTC m=+0.193309223 container died 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 03:55:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-024863d8799286d81c7e0ca6ae448e8700382ff96269c25c34de0cd2e05dd4b4-merged.mount: Deactivated successfully.
Nov 25 03:55:34 np0005534516 podman[378306]: 2025-11-25 08:55:34.816095718 +0000 UTC m=+0.264256284 container remove 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 03:55:34 np0005534516 systemd[1]: libpod-conmon-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope: Deactivated successfully.
Nov 25 03:55:35 np0005534516 podman[378345]: 2025-11-25 08:55:35.009616226 +0000 UTC m=+0.051805092 container create af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 03:55:35 np0005534516 systemd[1]: Started libpod-conmon-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope.
Nov 25 03:55:35 np0005534516 podman[378345]: 2025-11-25 08:55:34.981370887 +0000 UTC m=+0.023559753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:55:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:35 np0005534516 podman[378345]: 2025-11-25 08:55:35.12365918 +0000 UTC m=+0.165848076 container init af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 03:55:35 np0005534516 podman[378345]: 2025-11-25 08:55:35.13286068 +0000 UTC m=+0.175049546 container start af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 03:55:35 np0005534516 podman[378345]: 2025-11-25 08:55:35.137594349 +0000 UTC m=+0.179783226 container attach af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.413 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.416 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.435 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.511 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.512 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.522 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.522 253542 INFO nova.compute.claims [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:35 np0005534516 nova_compute[253538]: 2025-11-25 08:55:35.610 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 88 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149975109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]: {
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_id": 1,
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "type": "bluestore"
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    },
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_id": 2,
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "type": "bluestore"
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    },
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_id": 0,
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:        "type": "bluestore"
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]:    }
Nov 25 03:55:36 np0005534516 peaceful_banzai[378361]: }
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.078 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.086 253542 DEBUG nova.compute.provider_tree [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:55:36 np0005534516 systemd[1]: libpod-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope: Deactivated successfully.
Nov 25 03:55:36 np0005534516 podman[378345]: 2025-11-25 08:55:36.095914474 +0000 UTC m=+1.138103300 container died af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.101 253542 DEBUG nova.scheduler.client.report [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:55:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b-merged.mount: Deactivated successfully.
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.133 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.134 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.184 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.185 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:55:36 np0005534516 podman[378345]: 2025-11-25 08:55:36.188947206 +0000 UTC m=+1.231136032 container remove af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:55:36 np0005534516 systemd[1]: libpod-conmon-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope: Deactivated successfully.
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:55:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:36 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev aa319047-607a-432e-b886-5541a71b79d6 does not exist
Nov 25 03:55:36 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9bfefde3-1c74-4ae0-8352-3e06f3e0bdfd does not exist
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.371 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.387 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.524 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.525 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.526 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating image(s)#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.547 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.569 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.595 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.600 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.724 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.726 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.727 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.728 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.757 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:36 np0005534516 nova_compute[253538]: 2025-11-25 08:55:36.761 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f028149d-de9a-49c3-8805-49336474a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:37 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.025 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f028149d-de9a-49c3-8805-49336474a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.090 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image f028149d-de9a-49c3-8805-49336474a101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.192 253542 DEBUG nova.objects.instance [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.208 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.209 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Ensure instance console log exists: /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.265 253542 DEBUG nova.policy [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:55:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 88 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 03:55:37 np0005534516 nova_compute[253538]: 2025-11-25 08:55:37.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.162 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Successfully updated port: 3df2cc50-c6c1-476a-a12a-0d02fae91559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.355 253542 DEBUG nova.compute.manager [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.356 253542 DEBUG nova.compute.manager [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:55:38 np0005534516 nova_compute[253538]: 2025-11-25 08:55:38.356 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.018 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 03:55:39 np0005534516 nova_compute[253538]: 2025-11-25 08:55:39.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:55:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 122 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.304 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.318 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.319 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance network_info: |[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.319 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.320 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.323 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start _get_guest_xml network_info=[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.328 253542 WARNING nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.337 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.337 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.341 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.342 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.342 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.343 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.343 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.350 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:55:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352452760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.840 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:40 np0005534516 podman[378668]: 2025-11-25 08:55:40.850953779 +0000 UTC m=+0.086431414 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.875 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:40 np0005534516 nova_compute[253538]: 2025-11-25 08:55:40.882 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.078 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:55:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715981006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.319 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.321 253542 DEBUG nova.virt.libvirt.vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:55:36Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.321 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.322 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.323 253542 DEBUG nova.objects.instance [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.341 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <uuid>f028149d-de9a-49c3-8805-49336474a101</uuid>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <name>instance-00000076</name>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-443459317</nova:name>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:55:40</nova:creationTime>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <nova:port uuid="3df2cc50-c6c1-476a-a12a-0d02fae91559">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="serial">f028149d-de9a-49c3-8805-49336474a101</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="uuid">f028149d-de9a-49c3-8805-49336474a101</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f028149d-de9a-49c3-8805-49336474a101_disk">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f028149d-de9a-49c3-8805-49336474a101_disk.config">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:54:4e:c2"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <target dev="tap3df2cc50-c6"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/console.log" append="off"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:55:41 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:55:41 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:55:41 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:55:41 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.341 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Preparing to wait for external event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.343 253542 DEBUG nova.virt.libvirt.vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:55:36Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.343 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.344 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.344 253542 DEBUG os_vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.345 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.346 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.350 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df2cc50-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.351 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df2cc50-c6, col_values=(('external_ids', {'iface-id': '3df2cc50-c6c1-476a-a12a-0d02fae91559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:4e:c2', 'vm-uuid': 'f028149d-de9a-49c3-8805-49336474a101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:41 np0005534516 NetworkManager[48915]: <info>  [1764060941.3547] manager: (tap3df2cc50-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.362 253542 INFO os_vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.421 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.422 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.422 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:54:4e:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.423 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Using config drive#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.445 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:55:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.931 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating config drive at /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config#033[00m
Nov 25 03:55:41 np0005534516 nova_compute[253538]: 2025-11-25 08:55:41.935 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p3ow2_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.077 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p3ow2_x" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.115 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.120 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config f028149d-de9a-49c3-8805-49336474a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.301 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config f028149d-de9a-49c3-8805-49336474a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.302 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deleting local config drive /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config because it was imported into RBD.#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.340 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.341 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.356 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:42 np0005534516 kernel: tap3df2cc50-c6: entered promiscuous mode
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.3770] manager: (tap3df2cc50-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 25 03:55:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:42Z|01226|binding|INFO|Claiming lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 for this chassis.
Nov 25 03:55:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:42Z|01227|binding|INFO|3df2cc50-c6c1-476a-a12a-0d02fae91559: Claiming fa:16:3e:54:4e:c2 10.100.0.14
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.392 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f028149d-de9a-49c3-8805-49336474a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.394 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 bound to our chassis#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.396 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05073ace-d35c-48d1-9399-5c8964c484d2#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1356ce0d-9927-455b-a137-90e6fe8a31b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.412 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05073ace-d1 in ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.415 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05073ace-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.415 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e21ae9-f901-479d-ad27-d54fe04af978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14133ac0-b20b-4dd3-997a-a1bf44613df0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.432 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4e96da64-c335-4ff0-8a0b-1837064288cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Nov 25 03:55:42 np0005534516 systemd-machined[215790]: New machine qemu-148-instance-00000076.
Nov 25 03:55:42 np0005534516 systemd-udevd[378812]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.4584] device (tap3df2cc50-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[607cd86f-a64d-4703-a388-2584e98ee66d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.4614] device (tap3df2cc50-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:42Z|01228|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 ovn-installed in OVS
Nov 25 03:55:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:42Z|01229|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 up in Southbound
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d05a8fa-7aff-4a00-b02f-22964ceb6500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.550 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e51fe1f-9963-4a10-ae12-d57662af4449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 systemd-udevd[378819]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.5525] manager: (tap05073ace-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 25 03:55:42 np0005534516 podman[378801]: 2025-11-25 08:55:42.587550057 +0000 UTC m=+0.160507480 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.590 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbfa1b5-71cc-44b2-9390-89667a325853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.593 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e515bb6f-ba93-4275-bfea-7041786b0a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.6219] device (tap05073ace-d0): carrier: link connected
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.629 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb964c24-170f-414a-85c5-7395527c95f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.655 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85751b2c-cd98-4e14-b92f-f87a7a03d6b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629722, 'reachable_time': 18808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378854, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.673 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ac7d6d-ec18-48c0-8818-541b2405a342]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:25c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 629722, 'tstamp': 629722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378855, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cec05173-34b2-48c8-afbb-e073759a6ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629722, 'reachable_time': 18808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378856, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e5f875-9633-4009-9544-3d7e693a0d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0db55a2d-024f-4ffc-ad64-e2313abbf553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05073ace-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 NetworkManager[48915]: <info>  [1764060942.8149] manager: (tap05073ace-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 25 03:55:42 np0005534516 kernel: tap05073ace-d0: entered promiscuous mode
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05073ace-d0, col_values=(('external_ids', {'iface-id': '38363726-6a82-410a-a283-1a7b285deea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:42Z|01230|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.821 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.822 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[522c0b2b-848d-4500-94c8-407e4ff9c67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.822 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:55:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.823 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'env', 'PROCESS_TAG=haproxy-05073ace-d35c-48d1-9399-5c8964c484d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05073ace-d35c-48d1-9399-5c8964c484d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.864 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.8637898, f028149d-de9a-49c3-8805-49336474a101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.865 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Started (Lifecycle Event)#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.884 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.900 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.8639305, f028149d-de9a-49c3-8805-49336474a101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.900 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.907 253542 DEBUG nova.compute.manager [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.907 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.908 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.909 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.909 253542 DEBUG nova.compute.manager [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Processing event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.910 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.928 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.928 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.935 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.915906, f028149d-de9a-49c3-8805-49336474a101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.935 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.937 253542 INFO nova.virt.libvirt.driver [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance spawned successfully.#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.939 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.973 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:55:42 np0005534516 nova_compute[253538]: 2025-11-25 08:55:42.983 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:55:43 np0005534516 nova_compute[253538]: 2025-11-25 08:55:43.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:55:43 np0005534516 nova_compute[253538]: 2025-11-25 08:55:43.031 253542 INFO nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 6.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:55:43 np0005534516 nova_compute[253538]: 2025-11-25 08:55:43.031 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:43 np0005534516 nova_compute[253538]: 2025-11-25 08:55:43.087 253542 INFO nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 7.59 seconds to build instance.#033[00m
Nov 25 03:55:43 np0005534516 nova_compute[253538]: 2025-11-25 08:55:43.106 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:43 np0005534516 podman[378930]: 2025-11-25 08:55:43.212598129 +0000 UTC m=+0.047710089 container create d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 03:55:43 np0005534516 systemd[1]: Started libpod-conmon-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope.
Nov 25 03:55:43 np0005534516 podman[378930]: 2025-11-25 08:55:43.1894643 +0000 UTC m=+0.024576290 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:55:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:55:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c502fd0fc85e055f1ddf2aec0772fba519495d438e9ce6a804788847f7e5e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:55:43 np0005534516 podman[378930]: 2025-11-25 08:55:43.316677792 +0000 UTC m=+0.151789772 container init d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 03:55:43 np0005534516 podman[378930]: 2025-11-25 08:55:43.32392519 +0000 UTC m=+0.159037150 container start d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:55:43 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : New worker (378951) forked
Nov 25 03:55:43 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : Loading success.
Nov 25 03:55:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 03:55:44 np0005534516 nova_compute[253538]: 2025-11-25 08:55:44.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:44 np0005534516 nova_compute[253538]: 2025-11-25 08:55:44.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.005 253542 DEBUG nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.006 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.006 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.007 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.007 253542 DEBUG nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.008 253542 WARNING nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:55:45 np0005534516 nova_compute[253538]: 2025-11-25 08:55:45.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 03:55:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2456873636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.029 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.092 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.093 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.230 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060931.2297456, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.231 253542 INFO nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.247 253542 DEBUG nova.compute.manager [None req-ac113287-e704-499c-b72b-1cfe9e33ae74 - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.298 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.967384338378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.547 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f028149d-de9a-49c3-8805-49336474a101 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.548 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.548 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:55:46 np0005534516 nova_compute[253538]: 2025-11-25 08:55:46.718 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:55:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071085532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.170 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.177 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.199 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.231 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.232 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:47Z|01231|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:47 np0005534516 NetworkManager[48915]: <info>  [1764060947.4605] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 25 03:55:47 np0005534516 NetworkManager[48915]: <info>  [1764060947.4620] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Nov 25 03:55:47 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:47Z|01232|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:47 np0005534516 podman[379006]: 2025-11-25 08:55:47.895288546 +0000 UTC m=+0.136248410 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.925 253542 DEBUG nova.compute.manager [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG nova.compute.manager [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:55:47 np0005534516 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.136 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.137 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.138 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.138 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.139 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.141 253542 INFO nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Terminating instance#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.144 253542 DEBUG nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:55:48 np0005534516 kernel: tap3df2cc50-c6 (unregistering): left promiscuous mode
Nov 25 03:55:48 np0005534516 NetworkManager[48915]: <info>  [1764060948.1856] device (tap3df2cc50-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:48Z|01233|binding|INFO|Releasing lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 from this chassis (sb_readonly=0)
Nov 25 03:55:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:48Z|01234|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 down in Southbound
Nov 25 03:55:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:55:48Z|01235|binding|INFO|Removing iface tap3df2cc50-c6 ovn-installed in OVS
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.205 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f028149d-de9a-49c3-8805-49336474a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.208 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 unbound from our chassis#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.210 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05073ace-d35c-48d1-9399-5c8964c484d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.212 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa608817-01b9-48d3-81cb-97cf12dc7edb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.212 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace which is not needed anymore#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 25 03:55:48 np0005534516 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 5.724s CPU time.
Nov 25 03:55:48 np0005534516 systemd-machined[215790]: Machine qemu-148-instance-00000076 terminated.
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : haproxy version is 2.8.14-c23fe91
Nov 25 03:55:48 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : path to executable is /usr/sbin/haproxy
Nov 25 03:55:48 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [WARNING]  (378949) : Exiting Master process...
Nov 25 03:55:48 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [ALERT]    (378949) : Current worker (378951) exited with code 143 (Terminated)
Nov 25 03:55:48 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [WARNING]  (378949) : All workers exited. Exiting... (0)
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 systemd[1]: libpod-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope: Deactivated successfully.
Nov 25 03:55:48 np0005534516 podman[379054]: 2025-11-25 08:55:48.388207473 +0000 UTC m=+0.066911583 container died d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.391 253542 INFO nova.virt.libvirt.driver [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance destroyed successfully.#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.391 253542 DEBUG nova.objects.instance [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.412 253542 DEBUG nova.virt.libvirt.vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:55:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:43Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.413 253542 DEBUG nova.network.os_vif_util [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.416 253542 DEBUG nova.network.os_vif_util [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.416 253542 DEBUG os_vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.419 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df2cc50-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:55:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e-userdata-shm.mount: Deactivated successfully.
Nov 25 03:55:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-83c502fd0fc85e055f1ddf2aec0772fba519495d438e9ce6a804788847f7e5e0-merged.mount: Deactivated successfully.
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.428 253542 INFO os_vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')#033[00m
Nov 25 03:55:48 np0005534516 podman[379054]: 2025-11-25 08:55:48.436810735 +0000 UTC m=+0.115514855 container cleanup d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:55:48 np0005534516 systemd[1]: libpod-conmon-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope: Deactivated successfully.
Nov 25 03:55:48 np0005534516 podman[379106]: 2025-11-25 08:55:48.512986689 +0000 UTC m=+0.048302086 container remove d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.521 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b91a19f-efc5-43b4-89cf-c3c69c34f25b]: (4, ('Tue Nov 25 08:55:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e)\nd14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e\nTue Nov 25 08:55:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e)\nd14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.523 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67ecb0da-dbe6-42ef-8354-92b8baceffd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.524 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.526 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 kernel: tap05073ace-d0: left promiscuous mode
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.541 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bb01c1-6d8b-4c26-8fa4-9b97c8513bb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f5d47-10f6-40b5-8e9e-1a6840bb1abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.559 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94c93587-9cac-437b-962a-a362712f09aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[287fae9b-36c0-4184-bdb2-f26ba8afade1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629713, 'reachable_time': 27686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379125, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.589 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:55:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.589 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[73a48f67-e28a-41e8-8bc0-998706aef3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:55:48 np0005534516 systemd[1]: run-netns-ovnmeta\x2d05073ace\x2dd35c\x2d48d1\x2d9399\x2d5c8964c484d2.mount: Deactivated successfully.
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.737 253542 INFO nova.virt.libvirt.driver [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deleting instance files /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101_del#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.738 253542 INFO nova.virt.libvirt.driver [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deletion of /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101_del complete#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.790 253542 INFO nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.791 253542 DEBUG oslo.service.loopingcall [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.792 253542 DEBUG nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:55:48 np0005534516 nova_compute[253538]: 2025-11-25 08:55:48.792 253542 DEBUG nova.network.neutron [-] [instance: f028149d-de9a-49c3-8805-49336474a101] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:55:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:49.241 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:55:49 np0005534516 nova_compute[253538]: 2025-11-25 08:55:49.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:55:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:49.243 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:55:49 np0005534516 nova_compute[253538]: 2025-11-25 08:55:49.277 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:55:49 np0005534516 nova_compute[253538]: 2025-11-25 08:55:49.278 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:55:49 np0005534516 nova_compute[253538]: 2025-11-25 08:55:49.299 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:55:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 100 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 DEBUG nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:55:50 np0005534516 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 WARNING nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state deleting.
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.068 253542 DEBUG nova.network.neutron [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.085 253542 INFO nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 2.29 seconds to deallocate network for instance.
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.135 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.136 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.187 253542 DEBUG oslo_concurrency.processutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:55:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:55:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784495931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.635 253542 DEBUG oslo_concurrency.processutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.642 253542 DEBUG nova.compute.provider_tree [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.660 253542 DEBUG nova.scheduler.client.report [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.684 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.712 253542 INFO nova.scheduler.client.report [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance f028149d-de9a-49c3-8805-49336474a101
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.791 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:55:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 88 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 KiB/s wr, 123 op/s
Nov 25 03:55:51 np0005534516 nova_compute[253538]: 2025-11-25 08:55:51.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:52 np0005534516 nova_compute[253538]: 2025-11-25 08:55:52.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:55:53
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'backups', '.mgr']
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:55:53 np0005534516 nova_compute[253538]: 2025-11-25 08:55:53.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:55:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:55:55 np0005534516 nova_compute[253538]: 2025-11-25 08:55:55.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:55:55 np0005534516 nova_compute[253538]: 2025-11-25 08:55:55.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 03:55:55 np0005534516 nova_compute[253538]: 2025-11-25 08:55:55.586 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 03:55:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 03:55:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:55:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.4 KiB/s wr, 56 op/s
Nov 25 03:55:57 np0005534516 nova_compute[253538]: 2025-11-25 08:55:57.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:58 np0005534516 nova_compute[253538]: 2025-11-25 08:55:58.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:58 np0005534516 nova_compute[253538]: 2025-11-25 08:55:58.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:58 np0005534516 nova_compute[253538]: 2025-11-25 08:55:58.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.149298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959149346, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2061, "num_deletes": 251, "total_data_size": 3348479, "memory_usage": 3394752, "flush_reason": "Manual Compaction"}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959162994, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3281579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45134, "largest_seqno": 47194, "table_properties": {"data_size": 3272291, "index_size": 5846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19098, "raw_average_key_size": 20, "raw_value_size": 3253739, "raw_average_value_size": 3439, "num_data_blocks": 259, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060739, "oldest_key_time": 1764060739, "file_creation_time": 1764060959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 13713 microseconds, and 6759 cpu microseconds.
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.163032) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3281579 bytes OK
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.163050) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164299) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164327) EVENT_LOG_v1 {"time_micros": 1764060959164321, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3339825, prev total WAL file size 3339825, number of live WAL files 2.
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.165360) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3204KB)], [104(8724KB)]
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959165397, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12215031, "oldest_snapshot_seqno": -1}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7046 keys, 10541886 bytes, temperature: kUnknown
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959227009, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10541886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10493687, "index_size": 29463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 182013, "raw_average_key_size": 25, "raw_value_size": 10366448, "raw_average_value_size": 1471, "num_data_blocks": 1162, "num_entries": 7046, "num_filter_entries": 7046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.228433) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10541886 bytes
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.230125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 167.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7560, records dropped: 514 output_compression: NoCompression
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.230157) EVENT_LOG_v1 {"time_micros": 1764060959230144, "job": 62, "event": "compaction_finished", "compaction_time_micros": 62785, "compaction_time_cpu_micros": 34971, "output_level": 6, "num_output_files": 1, "total_output_size": 10541886, "num_input_records": 7560, "num_output_records": 7046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959230966, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959233259, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.165219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:55:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:55:59.246 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:55:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.872 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.873 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.890 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.979 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.980 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.986 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:56:00 np0005534516 nova_compute[253538]: 2025-11-25 08:56:00.986 253542 INFO nova.compute.claims [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.076 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900014663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.546 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.552 253542 DEBUG nova.compute.provider_tree [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.568 253542 DEBUG nova.scheduler.client.report [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.592 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.593 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.634 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.635 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.649 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.668 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.740 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.741 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.742 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating image(s)
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.761 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.781 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.800 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.804 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.882 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.883 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.884 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.885 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.906 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:01 np0005534516 nova_compute[253538]: 2025-11-25 08:56:01.909 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.080 253542 DEBUG nova.policy [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:56:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.559 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.638 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.926 253542 DEBUG nova.objects.instance [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.941 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.941 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Ensure instance console log exists: /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.942 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.942 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:02 np0005534516 nova_compute[253538]: 2025-11-25 08:56:02.943 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.389 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060948.3882115, f028149d-de9a-49c3-8805-49336474a101 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.389 253542 INFO nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Stopped (Lifecycle Event)
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.407 253542 DEBUG nova.compute.manager [None req-8be8f295-0aab-420e-9e7b-3ec2d53d9182 - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.430 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 95 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 53 KiB/s wr, 2 op/s
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.954 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Successfully updated port: 3df2cc50-c6c1-476a-a12a-0d02fae91559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.967 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.967 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:56:03 np0005534516 nova_compute[253538]: 2025-11-25 08:56:03.968 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:56:04 np0005534516 nova_compute[253538]: 2025-11-25 08:56:04.124 253542 DEBUG nova.compute.manager [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:56:04 np0005534516 nova_compute[253538]: 2025-11-25 08:56:04.126 253542 DEBUG nova.compute.manager [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:56:04 np0005534516 nova_compute[253538]: 2025-11-25 08:56:04.127 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.0174513251286059e-05 of space, bias 1.0, pg target 0.0030523539753858175 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:56:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:56:04 np0005534516 nova_compute[253538]: 2025-11-25 08:56:04.300 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:56:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 111 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 501 KiB/s wr, 25 op/s
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.130 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.151 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.151 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance network_info: |[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.152 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.152 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.155 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start _get_guest_xml network_info=[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.161 253542 WARNING nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.171 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.171 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.188 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.188 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.189 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.189 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.194 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673339189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.664 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.689 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:06 np0005534516 nova_compute[253538]: 2025-11-25 08:56:06.695 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224367298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.158 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.159 253542 DEBUG nova.virt.libvirt.vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:01Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.160 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.161 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.162 253542 DEBUG nova.objects.instance [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.177 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <uuid>4cb9b212-92f6-4b10-ac69-ba251266bfd2</uuid>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <name>instance-00000077</name>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-1751152359</nova:name>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:56:06</nova:creationTime>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <nova:port uuid="3df2cc50-c6c1-476a-a12a-0d02fae91559">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="serial">4cb9b212-92f6-4b10-ac69-ba251266bfd2</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="uuid">4cb9b212-92f6-4b10-ac69-ba251266bfd2</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:54:4e:c2"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <target dev="tap3df2cc50-c6"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/console.log" append="off"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:56:07 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:56:07 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:56:07 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:56:07 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.180 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Preparing to wait for external event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.180 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.181 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.181 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.183 253542 DEBUG nova.virt.libvirt.vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:01Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.184 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.185 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.185 253542 DEBUG os_vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.186 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.189 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.194 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df2cc50-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df2cc50-c6, col_values=(('external_ids', {'iface-id': '3df2cc50-c6c1-476a-a12a-0d02fae91559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:4e:c2', 'vm-uuid': '4cb9b212-92f6-4b10-ac69-ba251266bfd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:07 np0005534516 NetworkManager[48915]: <info>  [1764060967.1978] manager: (tap3df2cc50-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.205 253542 INFO os_vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.271 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.272 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.272 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:54:4e:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.273 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Using config drive#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.298 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.415 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.415 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.426 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.712 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating config drive at /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.717 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe_bnj2c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.867 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe_bnj2c" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.894 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.898 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:07 np0005534516 nova_compute[253538]: 2025-11-25 08:56:07.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.055 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.056 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deleting local config drive /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config because it was imported into RBD.#033[00m
Nov 25 03:56:08 np0005534516 kernel: tap3df2cc50-c6: entered promiscuous mode
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.1214] manager: (tap3df2cc50-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:08Z|01236|binding|INFO|Claiming lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 for this chassis.
Nov 25 03:56:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:08Z|01237|binding|INFO|3df2cc50-c6c1-476a-a12a-0d02fae91559: Claiming fa:16:3e:54:4e:c2 10.100.0.14
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.1663] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.1675] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.169 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4cb9b212-92f6-4b10-ac69-ba251266bfd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.171 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 bound to our chassis#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.172 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05073ace-d35c-48d1-9399-5c8964c484d2#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.187 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c701c4-27a3-4e8c-a28c-5daf5dee3eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.189 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05073ace-d1 in ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:56:08 np0005534516 systemd-udevd[379479]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.191 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05073ace-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b12e699-c6e5-4134-b656-ca3286aee7bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.193 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b72aa7a-10bb-4f13-bd02-c19b5d0656d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 systemd-machined[215790]: New machine qemu-149-instance-00000077.
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.2054] device (tap3df2cc50-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.2063] device (tap3df2cc50-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.210 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[aa54ce3c-b822-46b3-a956-228e15b9afeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 systemd[1]: Started Virtual Machine qemu-149-instance-00000077.
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a268769-cfe6-4d6b-bb47-1d5fc70994e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:08Z|01238|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 ovn-installed in OVS
Nov 25 03:56:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:08Z|01239|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 up in Southbound
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.281 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba5ba61-d343-4ea4-8c93-bc69fe6f2a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 systemd-udevd[379483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.2895] manager: (tap05073ace-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/504)
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dca4a725-69c2-4ff1-b203-31053948f797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.333 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8066a18e-b1f2-4253-a44a-249eaca23b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.336 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2576ba-cd2b-4d32-bd22-01f897eddd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.3630] device (tap05073ace-d0): carrier: link connected
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b94d5c-7554-4ca9-9773-2aa22fb6acba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44388534-fc7c-42e4-a8ed-fd1cec6611bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632296, 'reachable_time': 21388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379512, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01ae4a3f-8f9d-4641-b076-fca3660eefed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:25c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632296, 'tstamp': 632296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379513, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29180f-f134-4ce2-837c-69c9f6f1e4f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632296, 'reachable_time': 21388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379514, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.454 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70b1a560-8871-4ec1-8454-18fada95185d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.516 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7d05e6-1fb5-4f4e-b5a2-af1e973cf222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.517 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.517 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.518 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05073ace-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 NetworkManager[48915]: <info>  [1764060968.5199] manager: (tap05073ace-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Nov 25 03:56:08 np0005534516 kernel: tap05073ace-d0: entered promiscuous mode
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.523 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05073ace-d0, col_values=(('external_ids', {'iface-id': '38363726-6a82-410a-a283-1a7b285deea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:08Z|01240|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.527 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.527 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61f02a7d-6f1e-4338-a6b7-dbc326b7cafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.528 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
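The agent logs the fully rendered haproxy configuration before writing it out for the per-network metadata proxy. A minimal sketch of that render step using Python's `string.Template` — the template text and the `render_metadata_proxy_config` helper here are illustrative stand-ins, not neutron's actual code in `neutron/agent/ovn/metadata/driver.py`:

```python
from string import Template

# Illustrative template; the real one carries the full global/defaults/listen
# sections shown in the log above.
HAPROXY_TEMPLATE = Template("""\
global
    pidfile $pidfile
    daemon

listen listener
    bind $bind_address:$bind_port
    http-request add-header X-OVN-Network-ID $network_id
""")

def render_metadata_proxy_config(network_id: str) -> str:
    """Fill in the per-network values, as the agent does for each datapath."""
    return HAPROXY_TEMPLATE.substitute(
        pidfile=f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy",
        bind_address="169.254.169.254",
        bind_port=80,
        network_id=network_id,
    )

cfg = render_metadata_proxy_config("05073ace-d35c-48d1-9399-5c8964c484d2")
print(cfg)
```

The rendered file is then handed to haproxy inside the network namespace (`ip netns exec ovnmeta-<network-id> ... haproxy -f <conf>`), as the subsequent rootwrap invocation in the log shows.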
Nov 25 03:56:08 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.529 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'env', 'PROCESS_TAG=haproxy-05073ace-d35c-48d1-9399-5c8964c484d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05073ace-d35c-48d1-9399-5c8964c484d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.556 253542 DEBUG nova.compute.manager [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.556 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:08 np0005534516 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG nova.compute.manager [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Processing event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:56:08 np0005534516 podman[379546]: 2025-11-25 08:56:08.925635244 +0000 UTC m=+0.054848815 container create f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:56:08 np0005534516 systemd[1]: Started libpod-conmon-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope.
Nov 25 03:56:08 np0005534516 podman[379546]: 2025-11-25 08:56:08.896988894 +0000 UTC m=+0.026202515 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:56:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e19a01a3f2d2ec0b2a11352993021e8379576d9a8252569db20d988f648bd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:09 np0005534516 podman[379546]: 2025-11-25 08:56:09.009668421 +0000 UTC m=+0.138882012 container init f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 03:56:09 np0005534516 podman[379546]: 2025-11-25 08:56:09.014572284 +0000 UTC m=+0.143785855 container start f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:56:09 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : New worker (379567) forked
Nov 25 03:56:09 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : Loading success.
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.409 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4086294, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.409 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Started (Lifecycle Event)#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.411 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.415 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.418 253542 INFO nova.virt.libvirt.driver [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance spawned successfully.#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.418 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.434 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.438 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.450 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.450 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.452 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.473 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.473 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4087436, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.474 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.497 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.500 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4139462, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.501 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.518 253542 INFO nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.518 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.525 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.554 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.586 253542 INFO nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 8.64 seconds to build instance.#033[00m
Nov 25 03:56:09 np0005534516 nova_compute[253538]: 2025-11-25 08:56:09.602 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.637 253542 DEBUG nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.637 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:56:10 np0005534516 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 WARNING nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state None.#033[00m
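The acquire/acquired/released triple around `pop_instance_event`, followed by "No waiting events found" and the WARNING about an unexpected `network-vif-plugged`, reflects nova's per-instance event registry: a waiter is registered before the operation that triggers the event, popped under the `<uuid>-events` lock when the event arrives, and a second delivery finds nothing to pop. A toy sketch of that pattern (names mirror `nova.compute.manager.InstanceEvents`, but the code is a simplified illustration, not nova's implementation):

```python
import threading
from collections import defaultdict

class InstanceEvents:
    """Toy per-instance event registry mimicking the lock pattern in the log."""

    def __init__(self):
        self._lock = threading.Lock()     # plays the role of the "<uuid>-events" lock
        self._events = defaultdict(dict)  # instance uuid -> {event name: waiter}

    def prepare_for_instance_event(self, uuid: str, name: str) -> threading.Event:
        """Register a waiter before starting the operation that emits the event."""
        with self._lock:
            waiter = threading.Event()
            self._events[uuid][name] = waiter
            return waiter

    def pop_instance_event(self, uuid: str, name: str):
        """Pop the waiter if one is registered; None means no one was waiting,
        which nova logs as a 'Received unexpected event' warning."""
        with self._lock:
            return self._events[uuid].pop(name, None)

ev = InstanceEvents()
uuid = "4cb9b212-92f6-4b10-ac69-ba251266bfd2"
waiter = ev.prepare_for_instance_event(uuid, "network-vif-plugged")
first = ev.pop_instance_event(uuid, "network-vif-plugged")
second = ev.pop_instance_event(uuid, "network-vif-plugged")
```

Here `first` is the registered waiter (the 08:56:08 delivery, consumed while the spawn was waiting) and `second` is `None` (the duplicate 08:56:10 delivery, after `vm_state` is already `active` with no pending task).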
Nov 25 03:56:11 np0005534516 podman[379618]: 2025-11-25 08:56:11.844525022 +0000 UTC m=+0.087697029 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:56:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:12 np0005534516 podman[379637]: 2025-11-25 08:56:12.818679926 +0000 UTC m=+0.068494144 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.897 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.900 253542 INFO nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Terminating instance#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.901 253542 DEBUG nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 kernel: tap3df2cc50-c6 (unregistering): left promiscuous mode
Nov 25 03:56:12 np0005534516 NetworkManager[48915]: <info>  [1764060972.9532] device (tap3df2cc50-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:12Z|01241|binding|INFO|Releasing lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 from this chassis (sb_readonly=0)
Nov 25 03:56:12 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:12Z|01242|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 down in Southbound
Nov 25 03:56:12 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:12Z|01243|binding|INFO|Removing iface tap3df2cc50-c6 ovn-installed in OVS
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.982 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4cb9b212-92f6-4b10-ac69-ba251266bfd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:56:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.984 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 unbound from our chassis#033[00m
Nov 25 03:56:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.985 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05073ace-d35c-48d1-9399-5c8964c484d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:56:12 np0005534516 nova_compute[253538]: 2025-11-25 08:56:12.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1361355f-b934-4565-9953-5916232f2276]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace which is not needed anymore#033[00m
Nov 25 03:56:13 np0005534516 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 25 03:56:13 np0005534516 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Consumed 4.751s CPU time.
Nov 25 03:56:13 np0005534516 systemd-machined[215790]: Machine qemu-149-instance-00000077 terminated.
Nov 25 03:56:13 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : haproxy version is 2.8.14-c23fe91
Nov 25 03:56:13 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : path to executable is /usr/sbin/haproxy
Nov 25 03:56:13 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [WARNING]  (379565) : Exiting Master process...
Nov 25 03:56:13 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [ALERT]    (379565) : Current worker (379567) exited with code 143 (Terminated)
Nov 25 03:56:13 np0005534516 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [WARNING]  (379565) : All workers exited. Exiting... (0)
Nov 25 03:56:13 np0005534516 systemd[1]: libpod-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope: Deactivated successfully.
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.142 253542 INFO nova.virt.libvirt.driver [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance destroyed successfully.#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.142 253542 DEBUG nova.objects.instance [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:13 np0005534516 podman[379681]: 2025-11-25 08:56:13.144596708 +0000 UTC m=+0.047384961 container died f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.169 253542 DEBUG nova.virt.libvirt.vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.170 253542 DEBUG nova.network.os_vif_util [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.170 253542 DEBUG nova.network.os_vif_util [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.171 253542 DEBUG os_vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.173 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df2cc50-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f-userdata-shm.mount: Deactivated successfully.
Nov 25 03:56:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e2e19a01a3f2d2ec0b2a11352993021e8379576d9a8252569db20d988f648bd5-merged.mount: Deactivated successfully.
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.182 253542 INFO os_vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')#033[00m
Nov 25 03:56:13 np0005534516 podman[379681]: 2025-11-25 08:56:13.189063259 +0000 UTC m=+0.091851522 container cleanup f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:56:13 np0005534516 systemd[1]: libpod-conmon-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope: Deactivated successfully.
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.237 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.237 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:56:13 np0005534516 podman[379732]: 2025-11-25 08:56:13.253836672 +0000 UTC m=+0.044579075 container remove f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69c91dba-ea39-4011-b798-85461b0fac82]: (4, ('Tue Nov 25 08:56:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f)\nf3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f\nTue Nov 25 08:56:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f)\nf3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb639e3-34bc-41ab-80fb-30cd274d85be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.266 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:13 np0005534516 kernel: tap05073ace-d0: left promiscuous mode
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2139ae-4b60-4e35-aa65-a5d11ccab341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.282 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23647399-9b1d-4120-93c6-06286bd65c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[204d9758-f50a-4b82-af63-2c2c87bff0e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c2181e-73f0-46d1-aaa3-d7345a2b104f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632287, 'reachable_time': 30742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379750, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.311 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:56:13 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.311 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7afc4a-6319-42ee-a4a2-627d8a9b4c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:13 np0005534516 systemd[1]: run-netns-ovnmeta\x2d05073ace\x2dd35c\x2d48d1\x2d9399\x2d5c8964c484d2.mount: Deactivated successfully.
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.520 253542 INFO nova.virt.libvirt.driver [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deleting instance files /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2_del#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.521 253542 INFO nova.virt.libvirt.driver [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deletion of /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2_del complete#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 INFO nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 DEBUG oslo.service.loopingcall [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 DEBUG nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:56:13 np0005534516 nova_compute[253538]: 2025-11-25 08:56:13.573 253542 DEBUG nova.network.neutron [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:56:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 127 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.571 253542 DEBUG nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 WARNING nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state deleting.
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.600 253542 DEBUG nova.network.neutron [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.621 253542 INFO nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 2.05 seconds to deallocate network for instance.
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.661 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.662 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:15 np0005534516 nova_compute[253538]: 2025-11-25 08:56:15.711 253542 DEBUG oslo_concurrency.processutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 111 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Nov 25 03:56:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31950200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.184 253542 DEBUG oslo_concurrency.processutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.192 253542 DEBUG nova.compute.provider_tree [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.205 253542 DEBUG nova.scheduler.client.report [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.223 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.246 253542 INFO nova.scheduler.client.report [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 4cb9b212-92f6-4b10-ac69-ba251266bfd2
Nov 25 03:56:16 np0005534516 nova_compute[253538]: 2025-11-25 08:56:16.311 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 101 op/s
Nov 25 03:56:17 np0005534516 nova_compute[253538]: 2025-11-25 08:56:17.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:18 np0005534516 nova_compute[253538]: 2025-11-25 08:56:18.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:18 np0005534516 podman[379777]: 2025-11-25 08:56:18.863003046 +0000 UTC m=+0.107020384 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 03:56:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.778 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.803 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.871 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.871 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.881 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:56:20 np0005534516 nova_compute[253538]: 2025-11-25 08:56:20.881 253542 INFO nova.compute.claims [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.003 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190466925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.830 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.836 253542 DEBUG nova.compute.provider_tree [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.853 253542 DEBUG nova.scheduler.client.report [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.882 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.883 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:56:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.961 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.962 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:56:21 np0005534516 nova_compute[253538]: 2025-11-25 08:56:21.982 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.009 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.125 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.127 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.128 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating image(s)
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.159 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.194 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.221 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.224 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.299 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.300 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.301 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.301 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.324 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.328 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce8c3428-f7e4-49aa-9978-faaf5d514663_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.382 253542 DEBUG nova.policy [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:56:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.656 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce8c3428-f7e4-49aa-9978-faaf5d514663_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.743 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.865 253542 DEBUG nova.objects.instance [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.884 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.884 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Ensure instance console log exists: /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:22 np0005534516 nova_compute[253538]: 2025-11-25 08:56:22.900 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:23 np0005534516 nova_compute[253538]: 2025-11-25 08:56:23.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:23 np0005534516 nova_compute[253538]: 2025-11-25 08:56:23.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:23 np0005534516 nova_compute[253538]: 2025-11-25 08:56:23.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 99 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 602 KiB/s rd, 523 KiB/s wr, 48 op/s
Nov 25 03:56:24 np0005534516 nova_compute[253538]: 2025-11-25 08:56:24.119 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Successfully created port: d92eef96-9bbe-4743-96d0-393e7e6de4ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:56:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 119 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.877 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Successfully updated port: d92eef96-9bbe-4743-96d0-393e7e6de4ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG nova.compute.manager [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG nova.compute.manager [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:56:26 np0005534516 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.030 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:56:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.908 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance network_info: |[{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.924 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.926 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start _get_guest_xml network_info=[{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.932 253542 WARNING nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.941 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.941 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.945 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:56:27 np0005534516 nova_compute[253538]: 2025-11-25 08:56:27.951 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.140 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060973.1393652, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.140 253542 INFO nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.159 253542 DEBUG nova.compute.manager [None req-69f77c51-f368-452b-af2a-bd475e0ea4f1 - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275504292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.402 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.427 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.432 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4134776803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.890 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.892 253542 DEBUG nova.virt.libvirt.vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:22Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.893 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.894 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.895 253542 DEBUG nova.objects.instance [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.912 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <uuid>ce8c3428-f7e4-49aa-9978-faaf5d514663</uuid>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <name>instance-00000078</name>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537</nova:name>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:56:27</nova:creationTime>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <nova:port uuid="d92eef96-9bbe-4743-96d0-393e7e6de4ee">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="serial">ce8c3428-f7e4-49aa-9978-faaf5d514663</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="uuid">ce8c3428-f7e4-49aa-9978-faaf5d514663</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ce8c3428-f7e4-49aa-9978-faaf5d514663_disk">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:0d:26:ec"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <target dev="tapd92eef96-9b"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/console.log" append="off"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:56:28 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:56:28 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:56:28 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:56:28 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Preparing to wait for external event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.914 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.915 253542 DEBUG nova.virt.libvirt.vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:22Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.915 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.916 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.916 253542 DEBUG os_vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.917 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.918 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd92eef96-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd92eef96-9b, col_values=(('external_ids', {'iface-id': 'd92eef96-9bbe-4743-96d0-393e7e6de4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:26:ec', 'vm-uuid': 'ce8c3428-f7e4-49aa-9978-faaf5d514663'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:28 np0005534516 NetworkManager[48915]: <info>  [1764060988.9250] manager: (tapd92eef96-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.931 253542 INFO os_vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b')#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.993 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:0d:26:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:56:28 np0005534516 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Using config drive#033[00m
Nov 25 03:56:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:56:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:56:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:56:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:56:29 np0005534516 nova_compute[253538]: 2025-11-25 08:56:29.020 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:29 np0005534516 nova_compute[253538]: 2025-11-25 08:56:29.314 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:56:29 np0005534516 nova_compute[253538]: 2025-11-25 08:56:29.315 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:29 np0005534516 nova_compute[253538]: 2025-11-25 08:56:29.334 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.048 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating config drive at /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.054 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkdchm5r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.199 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkdchm5r" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.229 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.233 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.428 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.430 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deleting local config drive /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config because it was imported into RBD.#033[00m
Nov 25 03:56:30 np0005534516 kernel: tapd92eef96-9b: entered promiscuous mode
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.5106] manager: (tapd92eef96-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 25 03:56:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:30Z|01244|binding|INFO|Claiming lport d92eef96-9bbe-4743-96d0-393e7e6de4ee for this chassis.
Nov 25 03:56:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:30Z|01245|binding|INFO|d92eef96-9bbe-4743-96d0-393e7e6de4ee: Claiming fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.528 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:ec 10.100.0.3'], port_security=['fa:16:3e:0d:26:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8c3428-f7e4-49aa-9978-faaf5d514663', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58e30486-fde6-46bb-8263-c463bd38a1f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '84e8c954-c3f0-4a6c-88b0-2dc68f7ce745 aec330ab-8d77-47ae-8de6-bec0741c3114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8915864-93a9-4ad1-b7bb-a11d22ed3f29, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d92eef96-9bbe-4743-96d0-393e7e6de4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.531 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d92eef96-9bbe-4743-96d0-393e7e6de4ee in datapath 58e30486-fde6-46bb-8263-c463bd38a1f9 bound to our chassis#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.533 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58e30486-fde6-46bb-8263-c463bd38a1f9#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[181db5c8-2d5a-44c2-bff9-0af8aeccded2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.555 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58e30486-f1 in ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.558 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58e30486-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c849eaf-7363-4a41-bad8-e7454ba893fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.560 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b36e553-c417-4652-aa7c-74bf524d3a41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 systemd-udevd[380132]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:56:30 np0005534516 systemd-machined[215790]: New machine qemu-150-instance-00000078.
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.574 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b9027864-6297-4950-82bb-970a7dc49601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.5828] device (tapd92eef96-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.5835] device (tapd92eef96-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:56:30 np0005534516 systemd[1]: Started Virtual Machine qemu-150-instance-00000078.
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[943e2306-3206-4712-9c47-017c96e8216a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:30Z|01246|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee ovn-installed in OVS
Nov 25 03:56:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:30Z|01247|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee up in Southbound
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 auditd[703]: Audit daemon rotating log files
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.647 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e30866b7-bf10-494d-9278-c1111ed4d567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.652 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f78bb86-aad0-46ff-9ab2-9ddd6fca04a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.6533] manager: (tap58e30486-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/508)
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.682 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c003bb9-56e7-4f8a-a14a-9b83c9d61051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.685 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8bf430-56e6-4b74-a012-b690b80cabd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.7056] device (tap58e30486-f0): carrier: link connected
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.710 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120425b1-c75b-411f-96b7-d7f93ef475c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.732 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e03be5-450d-4a7e-97e6-a17b0e122b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58e30486-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634530, 'reachable_time': 24303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380164, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.751 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a33c26be-a27c-4509-b753-42723d17cdf2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d5e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634530, 'tstamp': 634530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380165, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb39c793-816f-4427-837b-f8304200c8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58e30486-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634530, 'reachable_time': 24303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380166, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2045f3-d046-408e-b73d-d6f1ba0c4c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.907 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da193fd3-b83b-4730-9241-fb95bc4f75d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.909 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e30486-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58e30486-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 kernel: tap58e30486-f0: entered promiscuous mode
Nov 25 03:56:30 np0005534516 NetworkManager[48915]: <info>  [1764060990.9139] manager: (tap58e30486-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.917 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58e30486-f0, col_values=(('external_ids', {'iface-id': 'f9837d3c-3aa7-48de-b240-549f6bf978b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:30Z|01248|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.932 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3aa950-fa8e-4f65-8dae-8a969c2604a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.933 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-58e30486-fde6-46bb-8263-c463bd38a1f9
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 58e30486-fde6-46bb-8263-c463bd38a1f9
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:56:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.935 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'env', 'PROCESS_TAG=haproxy-58e30486-fde6-46bb-8263-c463bd38a1f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58e30486-fde6-46bb-8263-c463bd38a1f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.952 253542 DEBUG nova.compute.manager [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG nova.compute.manager [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Processing event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.960 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9598863, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Started (Lifecycle Event)#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.963 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.966 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.969 253542 INFO nova.virt.libvirt.driver [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance spawned successfully.#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.969 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:56:30 np0005534516 nova_compute[253538]: 2025-11-25 08:56:30.991 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.004 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.005 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.006 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.007 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.008 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.009 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.014 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.058 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.066 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9608629, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.067 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Paused (Lifecycle Event)
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.108 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9654958, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.109 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Resumed (Lifecycle Event)
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.132 253542 INFO nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 9.01 seconds to spawn the instance on the hypervisor.
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.133 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.134 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.140 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.183 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.205 253542 INFO nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 10.35 seconds to build instance.
Nov 25 03:56:31 np0005534516 nova_compute[253538]: 2025-11-25 08:56:31.217 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:31 np0005534516 podman[380240]: 2025-11-25 08:56:31.358049173 +0000 UTC m=+0.059854550 container create 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:56:31 np0005534516 systemd[1]: Started libpod-conmon-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope.
Nov 25 03:56:31 np0005534516 podman[380240]: 2025-11-25 08:56:31.323624246 +0000 UTC m=+0.025429633 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:56:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:31 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97985d9ed470fd0792a7d93bf1b618a370e10b24c8a74fa4f0901da8068fed82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:31 np0005534516 podman[380240]: 2025-11-25 08:56:31.449599575 +0000 UTC m=+0.151405002 container init 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 03:56:31 np0005534516 podman[380240]: 2025-11-25 08:56:31.457280754 +0000 UTC m=+0.159086131 container start 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:56:31 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : New worker (380261) forked
Nov 25 03:56:31 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : Loading success.
Nov 25 03:56:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 03:56:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:32 np0005534516 nova_compute[253538]: 2025-11-25 08:56:32.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.043 253542 DEBUG nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.045 253542 DEBUG nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.045 253542 WARNING nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state active and task_state None.
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:56:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 03:56:33 np0005534516 nova_compute[253538]: 2025-11-25 08:56:33.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:35 np0005534516 NetworkManager[48915]: <info>  [1764060995.4044] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Nov 25 03:56:35 np0005534516 NetworkManager[48915]: <info>  [1764060995.4053] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 25 03:56:35 np0005534516 nova_compute[253538]: 2025-11-25 08:56:35.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:35 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:35Z|01249|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 03:56:35 np0005534516 nova_compute[253538]: 2025-11-25 08:56:35.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:35 np0005534516 nova_compute[253538]: 2025-11-25 08:56:35.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:35 np0005534516 nova_compute[253538]: 2025-11-25 08:56:35.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:56:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Nov 25 03:56:36 np0005534516 nova_compute[253538]: 2025-11-25 08:56:36.336 253542 DEBUG nova.compute.manager [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:56:36 np0005534516 nova_compute[253538]: 2025-11-25 08:56:36.337 253542 DEBUG nova.compute.manager [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:56:36 np0005534516 nova_compute[253538]: 2025-11-25 08:56:36.337 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:56:36 np0005534516 nova_compute[253538]: 2025-11-25 08:56:36.338 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:56:36 np0005534516 nova_compute[253538]: 2025-11-25 08:56:36.338 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 58649c52-1fac-4559-a770-9e0283e359a0 does not exist
Nov 25 03:56:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 614daa20-b0e0-4788-91c4-05ab3de665ff does not exist
Nov 25 03:56:37 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 38cfaaee-32b2-4997-b778-8672e3b3be46 does not exist
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.557000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997557074, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 551, "num_deletes": 250, "total_data_size": 557611, "memory_usage": 568616, "flush_reason": "Manual Compaction"}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997564386, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 381187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47195, "largest_seqno": 47745, "table_properties": {"data_size": 378478, "index_size": 681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7348, "raw_average_key_size": 20, "raw_value_size": 372913, "raw_average_value_size": 1038, "num_data_blocks": 31, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060960, "oldest_key_time": 1764060960, "file_creation_time": 1764060997, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 7453 microseconds, and 3657 cpu microseconds.
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.564453) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 381187 bytes OK
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.564479) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565636) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565661) EVENT_LOG_v1 {"time_micros": 1764060997565654, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 554512, prev total WAL file size 554512, number of live WAL files 2.
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.566343) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303036' seq:0, type:0; will stop at (end)
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(372KB)], [107(10MB)]
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997566387, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10923073, "oldest_snapshot_seqno": -1}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6908 keys, 7804615 bytes, temperature: kUnknown
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997615548, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7804615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7761787, "index_size": 24476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 179336, "raw_average_key_size": 25, "raw_value_size": 7641363, "raw_average_value_size": 1106, "num_data_blocks": 955, "num_entries": 6908, "num_filter_entries": 6908, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060997, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.615842) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7804615 bytes
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.617005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.7 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(49.1) write-amplify(20.5) OK, records in: 7405, records dropped: 497 output_compression: NoCompression
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.617033) EVENT_LOG_v1 {"time_micros": 1764060997617019, "job": 64, "event": "compaction_finished", "compaction_time_micros": 49276, "compaction_time_cpu_micros": 21822, "output_level": 6, "num_output_files": 1, "total_output_size": 7804615, "num_input_records": 7405, "num_output_records": 6908, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997617546, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997620269, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.566241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:37 np0005534516 podman[380542]: 2025-11-25 08:56:37.887728552 +0000 UTC m=+0.047093893 container create b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 03:56:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 484 KiB/s wr, 96 op/s
Nov 25 03:56:37 np0005534516 nova_compute[253538]: 2025-11-25 08:56:37.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:37 np0005534516 systemd[1]: Started libpod-conmon-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope.
Nov 25 03:56:37 np0005534516 podman[380542]: 2025-11-25 08:56:37.866991898 +0000 UTC m=+0.026357279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:37 np0005534516 podman[380542]: 2025-11-25 08:56:37.987977381 +0000 UTC m=+0.147342752 container init b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:56:37 np0005534516 podman[380542]: 2025-11-25 08:56:37.999284198 +0000 UTC m=+0.158649559 container start b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:56:38 np0005534516 podman[380542]: 2025-11-25 08:56:38.002952298 +0000 UTC m=+0.162317659 container attach b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:56:38 np0005534516 tender_thompson[380559]: 167 167
Nov 25 03:56:38 np0005534516 systemd[1]: libpod-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope: Deactivated successfully.
Nov 25 03:56:38 np0005534516 podman[380542]: 2025-11-25 08:56:38.009927298 +0000 UTC m=+0.169292659 container died b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 03:56:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8afdd2643f1e2a657c9eec1a4f5cc5586bb4f214e7388a35ac4159bb17d18d0d-merged.mount: Deactivated successfully.
Nov 25 03:56:38 np0005534516 podman[380542]: 2025-11-25 08:56:38.068210115 +0000 UTC m=+0.227575456 container remove b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:56:38 np0005534516 nova_compute[253538]: 2025-11-25 08:56:38.085 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:56:38 np0005534516 nova_compute[253538]: 2025-11-25 08:56:38.086 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:38 np0005534516 systemd[1]: libpod-conmon-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope: Deactivated successfully.
Nov 25 03:56:38 np0005534516 nova_compute[253538]: 2025-11-25 08:56:38.115 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:56:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:38 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:56:38 np0005534516 podman[380582]: 2025-11-25 08:56:38.28958913 +0000 UTC m=+0.052364086 container create 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:56:38 np0005534516 systemd[1]: Started libpod-conmon-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope.
Nov 25 03:56:38 np0005534516 podman[380582]: 2025-11-25 08:56:38.268528387 +0000 UTC m=+0.031303413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:38 np0005534516 podman[380582]: 2025-11-25 08:56:38.385887961 +0000 UTC m=+0.148662947 container init 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:56:38 np0005534516 podman[380582]: 2025-11-25 08:56:38.40311813 +0000 UTC m=+0.165893106 container start 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 03:56:38 np0005534516 podman[380582]: 2025-11-25 08:56:38.409535544 +0000 UTC m=+0.172310550 container attach 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:56:38 np0005534516 nova_compute[253538]: 2025-11-25 08:56:38.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:39 np0005534516 quirky_curran[380599]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:56:39 np0005534516 quirky_curran[380599]: --> relative data size: 1.0
Nov 25 03:56:39 np0005534516 quirky_curran[380599]: --> All data devices are unavailable
Nov 25 03:56:39 np0005534516 systemd[1]: libpod-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Deactivated successfully.
Nov 25 03:56:39 np0005534516 systemd[1]: libpod-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Consumed 1.139s CPU time.
Nov 25 03:56:39 np0005534516 podman[380582]: 2025-11-25 08:56:39.591437254 +0000 UTC m=+1.354212250 container died 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:56:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12-merged.mount: Deactivated successfully.
Nov 25 03:56:39 np0005534516 podman[380582]: 2025-11-25 08:56:39.662808517 +0000 UTC m=+1.425583463 container remove 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:56:39 np0005534516 systemd[1]: libpod-conmon-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Deactivated successfully.
Nov 25 03:56:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.29926847 +0000 UTC m=+0.048329206 container create 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.280945182 +0000 UTC m=+0.030005918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:40 np0005534516 systemd[1]: Started libpod-conmon-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope.
Nov 25 03:56:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.442245252 +0000 UTC m=+0.191305988 container init 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.450518417 +0000 UTC m=+0.199579153 container start 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:56:40 np0005534516 distracted_heisenberg[380796]: 167 167
Nov 25 03:56:40 np0005534516 systemd[1]: libpod-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope: Deactivated successfully.
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.457469697 +0000 UTC m=+0.206530473 container attach 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.457907339 +0000 UTC m=+0.206968085 container died 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:56:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3e259fa2e3ff4a26f1a697643623ff41ca4510560f88d0802becd22120dd1701-merged.mount: Deactivated successfully.
Nov 25 03:56:40 np0005534516 podman[380781]: 2025-11-25 08:56:40.534669518 +0000 UTC m=+0.283730254 container remove 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:56:40 np0005534516 systemd[1]: libpod-conmon-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope: Deactivated successfully.
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.564 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.564 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.748 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.748 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.749 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.749 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:40 np0005534516 podman[380819]: 2025-11-25 08:56:40.783142571 +0000 UTC m=+0.118709083 container create 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 03:56:40 np0005534516 podman[380819]: 2025-11-25 08:56:40.692294158 +0000 UTC m=+0.027860690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.898 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.902 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:40 np0005534516 nova_compute[253538]: 2025-11-25 08:56:40.917 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:56:40 np0005534516 systemd[1]: Started libpod-conmon-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope.
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.002 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.003 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.013 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.013 253542 INFO nova.compute.claims [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:56:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:41 np0005534516 podman[380819]: 2025-11-25 08:56:41.054184769 +0000 UTC m=+0.389751281 container init 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:56:41 np0005534516 podman[380819]: 2025-11-25 08:56:41.062576687 +0000 UTC m=+0.398143189 container start 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 03:56:41 np0005534516 podman[380819]: 2025-11-25 08:56:41.068331914 +0000 UTC m=+0.403898496 container attach 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:56:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.234 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980668265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.675 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.683 253542 DEBUG nova.compute.provider_tree [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.714 253542 DEBUG nova.scheduler.client.report [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.742 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.743 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.811 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.812 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.829 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]: {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    "0": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "devices": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "/dev/loop3"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            ],
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_name": "ceph_lv0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_size": "21470642176",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "name": "ceph_lv0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "tags": {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_name": "ceph",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.crush_device_class": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.encrypted": "0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_id": "0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.vdo": "0"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            },
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "vg_name": "ceph_vg0"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        }
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    ],
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    "1": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "devices": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "/dev/loop4"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            ],
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_name": "ceph_lv1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_size": "21470642176",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "name": "ceph_lv1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "tags": {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_name": "ceph",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.crush_device_class": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.encrypted": "0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_id": "1",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.vdo": "0"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            },
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "vg_name": "ceph_vg1"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        }
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    ],
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    "2": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "devices": [
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "/dev/loop5"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            ],
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_name": "ceph_lv2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_size": "21470642176",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "name": "ceph_lv2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "tags": {
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.cluster_name": "ceph",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.crush_device_class": "",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.encrypted": "0",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osd_id": "2",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:                "ceph.vdo": "0"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            },
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "type": "block",
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:            "vg_name": "ceph_vg2"
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:        }
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]:    ]
Nov 25 03:56:41 np0005534516 magical_leavitt[380836]: }
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.882 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:56:41 np0005534516 systemd[1]: libpod-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope: Deactivated successfully.
Nov 25 03:56:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 03:56:41 np0005534516 podman[380868]: 2025-11-25 08:56:41.967549439 +0000 UTC m=+0.053701533 container died 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.989 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.991 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:56:41 np0005534516 nova_compute[253538]: 2025-11-25 08:56:41.991 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating image(s)
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.025 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.053 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.076 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.079 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.117 253542 DEBUG nova.policy [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.121 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.141 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.142 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.162 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.163 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.163 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.164 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.190 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.195 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e-merged.mount: Deactivated successfully.
Nov 25 03:56:42 np0005534516 podman[380868]: 2025-11-25 08:56:42.394486219 +0000 UTC m=+0.480638303 container remove 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:56:42 np0005534516 systemd[1]: libpod-conmon-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope: Deactivated successfully.
Nov 25 03:56:42 np0005534516 podman[380867]: 2025-11-25 08:56:42.428641329 +0000 UTC m=+0.499389873 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.556 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.569058) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002569086, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 314, "num_deletes": 255, "total_data_size": 103986, "memory_usage": 111488, "flush_reason": "Manual Compaction"}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002579710, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 103384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47746, "largest_seqno": 48059, "table_properties": {"data_size": 101349, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5061, "raw_average_key_size": 17, "raw_value_size": 97335, "raw_average_value_size": 340, "num_data_blocks": 9, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060998, "oldest_key_time": 1764060998, "file_creation_time": 1764061002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 10692 microseconds, and 818 cpu microseconds.
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.579748) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 103384 bytes OK
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.579765) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583214) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583239) EVENT_LOG_v1 {"time_micros": 1764061002583233, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583257) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 101725, prev total WAL file size 101725, number of live WAL files 2.
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373635' seq:72057594037927935, type:22 .. '6C6F676D0032303136' seq:0, type:0; will stop at (end)
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(100KB)], [110(7621KB)]
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002583911, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 7907999, "oldest_snapshot_seqno": -1}
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.623 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6677 keys, 7787573 bytes, temperature: kUnknown
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002723811, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 7787573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7745695, "index_size": 24102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175495, "raw_average_key_size": 26, "raw_value_size": 7628682, "raw_average_value_size": 1142, "num_data_blocks": 935, "num_entries": 6677, "num_filter_entries": 6677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.724026) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 7787573 bytes
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.731089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.5 rd, 55.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.4 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(151.8) write-amplify(75.3) OK, records in: 7194, records dropped: 517 output_compression: NoCompression
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.731121) EVENT_LOG_v1 {"time_micros": 1764061002731107, "job": 66, "event": "compaction_finished", "compaction_time_micros": 139968, "compaction_time_cpu_micros": 20490, "output_level": 6, "num_output_files": 1, "total_output_size": 7787573, "num_input_records": 7194, "num_output_records": 6677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002731280, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002733597, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:56:42 np0005534516 nova_compute[253538]: 2025-11-25 08:56:42.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.040 253542 DEBUG nova.objects.instance [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.060 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.061 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Ensure instance console log exists: /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.061 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.062 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.062 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.172 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Successfully created port: 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.174474959 +0000 UTC m=+0.071713572 container create 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 03:56:43 np0005534516 systemd[1]: Started libpod-conmon-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope.
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.127341807 +0000 UTC m=+0.024580410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.262900326 +0000 UTC m=+0.160138919 container init 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.27147071 +0000 UTC m=+0.168709283 container start 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:56:43 np0005534516 youthful_darwin[381222]: 167 167
Nov 25 03:56:43 np0005534516 systemd[1]: libpod-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope: Deactivated successfully.
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.281632157 +0000 UTC m=+0.178870750 container attach 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.282970193 +0000 UTC m=+0.180208796 container died 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:56:43 np0005534516 podman[381218]: 2025-11-25 08:56:43.296276165 +0000 UTC m=+0.082758444 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 03:56:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12df25f68876a8eab5310ab331d30ec98c138b94564c857defc2858343924eb7-merged.mount: Deactivated successfully.
Nov 25 03:56:43 np0005534516 podman[381204]: 2025-11-25 08:56:43.345941927 +0000 UTC m=+0.243180520 container remove 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:56:43 np0005534516 systemd[1]: libpod-conmon-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope: Deactivated successfully.
Nov 25 03:56:43 np0005534516 podman[381264]: 2025-11-25 08:56:43.561944516 +0000 UTC m=+0.069960055 container create 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:56:43 np0005534516 systemd[1]: Started libpod-conmon-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope.
Nov 25 03:56:43 np0005534516 podman[381264]: 2025-11-25 08:56:43.533192223 +0000 UTC m=+0.041207782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:56:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:43 np0005534516 podman[381264]: 2025-11-25 08:56:43.872717115 +0000 UTC m=+0.380732654 container init 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 03:56:43 np0005534516 podman[381264]: 2025-11-25 08:56:43.884194367 +0000 UTC m=+0.392209876 container start 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 03:56:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 146 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 640 KiB/s wr, 71 op/s
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.918 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Successfully updated port: 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:56:43 np0005534516 nova_compute[253538]: 2025-11-25 08:56:43.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.021 253542 DEBUG nova.compute.manager [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.022 253542 DEBUG nova.compute.manager [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.022 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.089 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:56:44 np0005534516 podman[381264]: 2025-11-25 08:56:44.223000999 +0000 UTC m=+0.731016508 container attach 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:44Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 03:56:44 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:44Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.733 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.750 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.750 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance network_info: |[{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.751 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.751 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.755 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start _get_guest_xml network_info=[{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.761 253542 WARNING nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.768 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.770 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.778 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:56:44 np0005534516 nova_compute[253538]: 2025-11-25 08:56:44.784 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:44 np0005534516 nice_joliot[381280]: {
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_id": 1,
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "type": "bluestore"
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    },
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_id": 2,
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "type": "bluestore"
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    },
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_id": 0,
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:        "type": "bluestore"
Nov 25 03:56:44 np0005534516 nice_joliot[381280]:    }
Nov 25 03:56:44 np0005534516 nice_joliot[381280]: }
Nov 25 03:56:44 np0005534516 systemd[1]: libpod-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope: Deactivated successfully.
Nov 25 03:56:44 np0005534516 podman[381264]: 2025-11-25 08:56:44.896946993 +0000 UTC m=+1.404962502 container died 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:56:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191-merged.mount: Deactivated successfully.
Nov 25 03:56:44 np0005534516 podman[381264]: 2025-11-25 08:56:44.997008946 +0000 UTC m=+1.505024455 container remove 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 03:56:45 np0005534516 systemd[1]: libpod-conmon-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope: Deactivated successfully.
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c4c00c97-d109-4459-ac6f-973872f879cf does not exist
Nov 25 03:56:45 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 945e6483-3da6-41d7-82c7-5485613d822c does not exist
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43139613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.228 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.261 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.265 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:56:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186944104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.727 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.729 253542 DEBUG nova.virt.libvirt.vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:41Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.729 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.730 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.732 253542 DEBUG nova.objects.instance [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.745 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <uuid>43128a42-ed0f-42ff-8282-4ef978e7c43c</uuid>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <name>instance-00000079</name>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-1161720817</nova:name>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:56:44</nova:creationTime>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <nova:port uuid="3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="serial">43128a42-ed0f-42ff-8282-4ef978e7c43c</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="uuid">43128a42-ed0f-42ff-8282-4ef978e7c43c</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/43128a42-ed0f-42ff-8282-4ef978e7c43c_disk">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ec:7c:8c"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <target dev="tap3f8cf5bd-c7"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/console.log" append="off"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:56:45 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:56:45 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:56:45 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:56:45 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.745 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Preparing to wait for external event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.747 253542 DEBUG nova.virt.libvirt.vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:41Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.748 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.748 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.749 253542 DEBUG os_vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.750 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f8cf5bd-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f8cf5bd-c7, col_values=(('external_ids', {'iface-id': '3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:7c:8c', 'vm-uuid': '43128a42-ed0f-42ff-8282-4ef978e7c43c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:45 np0005534516 NetworkManager[48915]: <info>  [1764061005.7583] manager: (tap3f8cf5bd-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.759 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.765 253542 INFO os_vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7')#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.812 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.813 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.813 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:ec:7c:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.814 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Using config drive#033[00m
Nov 25 03:56:45 np0005534516 nova_compute[253538]: 2025-11-25 08:56:45.841 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 172 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 03:56:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:46 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:56:46 np0005534516 nova_compute[253538]: 2025-11-25 08:56:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.065 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.065 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.078 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 203 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 03:56:47 np0005534516 nova_compute[253538]: 2025-11-25 08:56:47.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037464691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.073 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating config drive at /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.077 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcp8r4un execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.108 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.217 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.218 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.220 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcp8r4un" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.338 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.345 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.416 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.417 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.614 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.94587326049805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.726 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ce8c3428-f7e4-49aa-9978-faaf5d514663 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.726 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 43128a42-ed0f-42ff-8282-4ef978e7c43c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.727 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.727 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 03:56:48 np0005534516 nova_compute[253538]: 2025-11-25 08:56:48.810 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:56:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:56:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487965594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.261 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.269 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.286 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.318 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.319 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.406 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.407 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deleting local config drive /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config because it was imported into RBD.
Nov 25 03:56:49 np0005534516 kernel: tap3f8cf5bd-c7: entered promiscuous mode
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.4941] manager: (tap3f8cf5bd-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:49Z|01250|binding|INFO|Claiming lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for this chassis.
Nov 25 03:56:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:49Z|01251|binding|INFO|3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6: Claiming fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.505 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7c:8c 10.100.0.11'], port_security=['fa:16:3e:ec:7c:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '43128a42-ed0f-42ff-8282-4ef978e7c43c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a3361c5-4f78-4935-9e24-d43d47b272af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa7587ee-e656-41ca-b100-9a0da067d1dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d3afb1-0c55-49ec-b7d6-fd301cdfea08, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.508 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 in datapath 6a3361c5-4f78-4935-9e24-d43d47b272af bound to our chassis
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.511 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 03:56:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:49Z|01252|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 ovn-installed in OVS
Nov 25 03:56:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:49Z|01253|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 up in Southbound
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b46a92d1-820e-4459-bf0d-4056866c82a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.531 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a3361c5-41 in ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.534 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a3361c5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.535 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[880f7156-e399-4e01-a90d-b9bc44ad0eed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.538 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d62e8bed-0bfa-42f5-a409-3250b355f134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.555 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5751e282-17cd-4ced-bf74-50519a603416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 systemd-machined[215790]: New machine qemu-151-instance-00000079.
Nov 25 03:56:49 np0005534516 systemd[1]: Started Virtual Machine qemu-151-instance-00000079.
Nov 25 03:56:49 np0005534516 systemd-udevd[381572]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c623ed1f-8320-4edc-b9c7-978df7fd1df7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.6055] device (tap3f8cf5bd-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.6069] device (tap3f8cf5bd-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.624 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e845e13-88c9-45aa-b1c6-6118c1149d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[249a776d-a28e-440a-b519-eecb0bef8783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.6302] manager: (tap6a3361c5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.670 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a81204fe-a794-4235-9eee-0d356785f6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4c0ede-1b46-4827-9d31-31506d67c524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 podman[381552]: 2025-11-25 08:56:49.688289338 +0000 UTC m=+0.142151271 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.7035] device (tap6a3361c5-40): carrier: link connected
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.708 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3fd366-ea52-41c9-bf59-56ff6363e4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71e95015-49ee-4eff-af25-758e596c8da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a3361c5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636430, 'reachable_time': 26935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381615, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.749 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffca75f-d5f4-4f3c-a693-3840dd359371]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:4e9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636430, 'tstamp': 636430}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381616, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff36863-34ec-4f7c-a17a-47b4b53ea2ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a3361c5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636430, 'reachable_time': 26935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381617, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.803 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94a41d70-de9c-4fa9-9253-d8f9b0a7a976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84905fa4-3710-4a59-883c-a21b0b52a9a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.873 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a3361c5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.873 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a3361c5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 kernel: tap6a3361c5-40: entered promiscuous mode
Nov 25 03:56:49 np0005534516 NetworkManager[48915]: <info>  [1764061009.8772] manager: (tap6a3361c5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.878 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.881 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a3361c5-40, col_values=(('external_ids', {'iface-id': '83822cb3-5f48-4869-a929-1fd9fc361865'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 ovn_controller[152859]: 2025-11-25T08:56:49Z|01254|binding|INFO|Releasing lport 83822cb3-5f48-4869-a929-1fd9fc361865 from this chassis (sb_readonly=0)
Nov 25 03:56:49 np0005534516 nova_compute[253538]: 2025-11-25 08:56:49.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:56:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.905 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.906 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a864ec30-ec73-4c4a-b155-176cc608d20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.907 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:56:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.908 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'env', 'PROCESS_TAG=haproxy-6a3361c5-4f78-4935-9e24-d43d47b272af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a3361c5-4f78-4935-9e24-d43d47b272af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.100 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.100013, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.101 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Started (Lifecycle Event)#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.129 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.133 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.100198, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.163 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.167 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.271 253542 DEBUG nova.compute.manager [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.273 253542 DEBUG nova.compute.manager [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Processing event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.273 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.276 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.2768142, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.278 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.282 253542 INFO nova.virt.libvirt.driver [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance spawned successfully.#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.283 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.313 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.323 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.327 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.328 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.328 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.329 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.329 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.330 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.356 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:56:50 np0005534516 podman[381689]: 2025-11-25 08:56:50.282961034 +0000 UTC m=+0.034951153 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.385 253542 INFO nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 8.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.386 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.448 253542 INFO nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 9.47 seconds to build instance.#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.464 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:50 np0005534516 nova_compute[253538]: 2025-11-25 08:56:50.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:50 np0005534516 podman[381689]: 2025-11-25 08:56:50.888153716 +0000 UTC m=+0.640143795 container create de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 03:56:51 np0005534516 systemd[1]: Started libpod-conmon-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope.
Nov 25 03:56:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:56:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b842b0da7b301077b2be32a4ebc9c2c07bb6ef40e3f596e187ca5e2f7084ab4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:56:51 np0005534516 podman[381689]: 2025-11-25 08:56:51.16274009 +0000 UTC m=+0.914730239 container init de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:56:51 np0005534516 podman[381689]: 2025-11-25 08:56:51.175196398 +0000 UTC m=+0.927186487 container start de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:56:51 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : New worker (381710) forked
Nov 25 03:56:51 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : Loading success.
Nov 25 03:56:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.385 253542 DEBUG nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.387 253542 WARNING nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received unexpected event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:56:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:52 np0005534516 nova_compute[253538]: 2025-11-25 08:56:52.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:53 np0005534516 nova_compute[253538]: 2025-11-25 08:56:53.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:53.305 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:56:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:53.308 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:56:53
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups']
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:56:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:56:54 np0005534516 nova_compute[253538]: 2025-11-25 08:56:54.658 253542 DEBUG nova.compute.manager [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:56:54 np0005534516 nova_compute[253538]: 2025-11-25 08:56:54.659 253542 DEBUG nova.compute.manager [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:56:54 np0005534516 nova_compute[253538]: 2025-11-25 08:56:54.660 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:56:54 np0005534516 nova_compute[253538]: 2025-11-25 08:56:54.661 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:56:54 np0005534516 nova_compute[253538]: 2025-11-25 08:56:54.662 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:56:55 np0005534516 nova_compute[253538]: 2025-11-25 08:56:55.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 151 op/s
Nov 25 03:56:56 np0005534516 nova_compute[253538]: 2025-11-25 08:56:56.375 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:56:56 np0005534516 nova_compute[253538]: 2025-11-25 08:56:56.375 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:56:56 np0005534516 nova_compute[253538]: 2025-11-25 08:56:56.408 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:56:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:56:57.312 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:56:57 np0005534516 nova_compute[253538]: 2025-11-25 08:56:57.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:56:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 03:56:57 np0005534516 nova_compute[253538]: 2025-11-25 08:56:57.916 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:56:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 48 KiB/s wr, 78 op/s
Nov 25 03:57:00 np0005534516 nova_compute[253538]: 2025-11-25 08:57:00.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 216 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 172 KiB/s wr, 75 op/s
Nov 25 03:57:02 np0005534516 nova_compute[253538]: 2025-11-25 08:57:02.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:02 np0005534516 nova_compute[253538]: 2025-11-25 08:57:02.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 224 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 681 KiB/s wr, 54 op/s
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012537543953897246 of space, bias 1.0, pg target 0.37612631861691737 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:57:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:57:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:04Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 03:57:04 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:04Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 03:57:05 np0005534516 nova_compute[253538]: 2025-11-25 08:57:05.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 235 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 03:57:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 243 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 03:57:07 np0005534516 nova_compute[253538]: 2025-11-25 08:57:07.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.447 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.448 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.468 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.566 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.567 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.578 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.578 253542 INFO nova.compute.claims [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:57:08 np0005534516 nova_compute[253538]: 2025-11-25 08:57:08.777 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466261657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.256 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.263 253542 DEBUG nova.compute.provider_tree [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.278 253542 DEBUG nova.scheduler.client.report [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.310 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.311 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.352 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.353 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.385 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.444 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.534 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.536 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.537 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating image(s)#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.572 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.611 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.644 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.650 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.763 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.765 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.765 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.794 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:09 np0005534516 nova_compute[253538]: 2025-11-25 08:57:09.820 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9b511004-21d7-4867-aa46-4e7219827b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.095 253542 DEBUG nova.policy [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8ce5e2935141427a90707c14e4a73ad9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54ab33f9507e43fca43c45e6fc57f565', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.184 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9b511004-21d7-4867-aa46-4e7219827b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.272 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] resizing rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.361 253542 DEBUG nova.objects.instance [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.373 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.374 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Ensure instance console log exists: /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.374 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.375 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.375 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:10 np0005534516 nova_compute[253538]: 2025-11-25 08:57:10.876 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Successfully created port: 88814764-016b-4232-82f8-72dbbb384932 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:57:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 266 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Nov 25 03:57:11 np0005534516 nova_compute[253538]: 2025-11-25 08:57:11.967 253542 INFO nova.compute.manager [None req-c2a560f4-8cf1-4e84-a6b6-90c6ff3e17c3 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Get console output#033[00m
Nov 25 03:57:11 np0005534516 nova_compute[253538]: 2025-11-25 08:57:11.974 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.009 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Successfully updated port: 88814764-016b-4232-82f8-72dbbb384932 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.023 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.023 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.024 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.117 253542 DEBUG nova.compute.manager [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.118 253542 DEBUG nova.compute.manager [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.119 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.187 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:57:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:12 np0005534516 podman[381909]: 2025-11-25 08:57:12.844410469 +0000 UTC m=+0.086752643 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 25 03:57:12 np0005534516 nova_compute[253538]: 2025-11-25 08:57:12.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:13 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 03:57:13 np0005534516 podman[381929]: 2025-11-25 08:57:13.842603677 +0000 UTC m=+0.086510185 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:57:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 280 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.959 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.982 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.983 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance network_info: |[{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.983 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.984 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.989 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start _get_guest_xml network_info=[{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:57:13 np0005534516 nova_compute[253538]: 2025-11-25 08:57:13.996 253542 WARNING nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.002 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.003 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.013 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.014 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.015 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.015 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.016 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.016 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.018 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.018 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.019 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.019 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.020 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.024 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:57:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947024011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.526 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.556 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:14 np0005534516 nova_compute[253538]: 2025-11-25 08:57:14.562 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:15Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 03:57:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:57:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301800043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.141 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.145 253542 DEBUG nova.virt.libvirt.vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:09Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.146 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.148 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.151 253542 DEBUG nova.objects.instance [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.172 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.172 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.175 253542 INFO nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Terminating instance#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.176 253542 DEBUG nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.179 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <uuid>9b511004-21d7-4867-aa46-4e7219827b6e</uuid>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <name>instance-0000007a</name>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725</nova:name>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:57:13</nova:creationTime>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:user uuid="8ce5e2935141427a90707c14e4a73ad9">tempest-TestSecurityGroupsBasicOps-429479529-project-member</nova:user>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:project uuid="54ab33f9507e43fca43c45e6fc57f565">tempest-TestSecurityGroupsBasicOps-429479529</nova:project>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <nova:port uuid="88814764-016b-4232-82f8-72dbbb384932">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="serial">9b511004-21d7-4867-aa46-4e7219827b6e</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="uuid">9b511004-21d7-4867-aa46-4e7219827b6e</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9b511004-21d7-4867-aa46-4e7219827b6e_disk">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9b511004-21d7-4867-aa46-4e7219827b6e_disk.config">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:4c:8a:98"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <target dev="tap88814764-01"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/console.log" append="off"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:57:15 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:57:15 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:57:15 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:57:15 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.181 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Preparing to wait for external event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.182 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.183 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.183 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.184 253542 DEBUG nova.virt.libvirt.vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:09Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.185 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.186 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.187 253542 DEBUG os_vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.190 253542 DEBUG nova.compute.manager [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.190 253542 DEBUG nova.compute.manager [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.191 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.191 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.192 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.196 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.205 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88814764-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.206 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88814764-01, col_values=(('external_ids', {'iface-id': '88814764-016b-4232-82f8-72dbbb384932', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:8a:98', 'vm-uuid': '9b511004-21d7-4867-aa46-4e7219827b6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:15 np0005534516 NetworkManager[48915]: <info>  [1764061035.2092] manager: (tap88814764-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.219 253542 INFO os_vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01')#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.387 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No VIF found with MAC fa:16:3e:4c:8a:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Using config drive#033[00m
Nov 25 03:57:15 np0005534516 kernel: tap3f8cf5bd-c7 (unregistering): left promiscuous mode
Nov 25 03:57:15 np0005534516 NetworkManager[48915]: <info>  [1764061035.4120] device (tap3f8cf5bd-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:57:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:15Z|01255|binding|INFO|Releasing lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 from this chassis (sb_readonly=0)
Nov 25 03:57:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:15Z|01256|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 down in Southbound
Nov 25 03:57:15 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:15Z|01257|binding|INFO|Removing iface tap3f8cf5bd-c7 ovn-installed in OVS
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.438 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.444 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7c:8c 10.100.0.11'], port_security=['fa:16:3e:ec:7c:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '43128a42-ed0f-42ff-8282-4ef978e7c43c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a3361c5-4f78-4935-9e24-d43d47b272af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa7587ee-e656-41ca-b100-9a0da067d1dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d3afb1-0c55-49ec-b7d6-fd301cdfea08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.445 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 in datapath 6a3361c5-4f78-4935-9e24-d43d47b272af unbound from our chassis#033[00m
Nov 25 03:57:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.447 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a3361c5-4f78-4935-9e24-d43d47b272af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28432021-166b-4c5f-82e7-c80eeb7675bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:15 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.449 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af namespace which is not needed anymore#033[00m
Nov 25 03:57:15 np0005534516 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 25 03:57:15 np0005534516 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000079.scope: Consumed 13.709s CPU time.
Nov 25 03:57:15 np0005534516 systemd-machined[215790]: Machine qemu-151-instance-00000079 terminated.
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.648 253542 INFO nova.virt.libvirt.driver [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance destroyed successfully.#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.649 253542 DEBUG nova.objects.instance [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.666 253542 DEBUG nova.virt.libvirt.vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.666 253542 DEBUG nova.network.os_vif_util [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.667 253542 DEBUG nova.network.os_vif_util [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.668 253542 DEBUG os_vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.670 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f8cf5bd-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:15 np0005534516 nova_compute[253538]: 2025-11-25 08:57:15.680 253542 INFO os_vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7')#033[00m
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : haproxy version is 2.8.14-c23fe91
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : path to executable is /usr/sbin/haproxy
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : Exiting Master process...
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : Exiting Master process...
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [ALERT]    (381708) : Current worker (381710) exited with code 143 (Terminated)
Nov 25 03:57:15 np0005534516 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : All workers exited. Exiting... (0)
Nov 25 03:57:15 np0005534516 systemd[1]: libpod-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope: Deactivated successfully.
Nov 25 03:57:15 np0005534516 podman[382059]: 2025-11-25 08:57:15.714535388 +0000 UTC m=+0.177774009 container died de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:57:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da-userdata-shm.mount: Deactivated successfully.
Nov 25 03:57:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3b842b0da7b301077b2be32a4ebc9c2c07bb6ef40e3f596e187ca5e2f7084ab4-merged.mount: Deactivated successfully.
Nov 25 03:57:15 np0005534516 podman[382059]: 2025-11-25 08:57:15.849390819 +0000 UTC m=+0.312629440 container cleanup de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 03:57:15 np0005534516 systemd[1]: libpod-conmon-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope: Deactivated successfully.
Nov 25 03:57:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 293 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Nov 25 03:57:16 np0005534516 podman[382120]: 2025-11-25 08:57:16.023637992 +0000 UTC m=+0.148212775 container remove de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.030 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab51aaec-5cb4-4975-8488-fe79ff0b1519]: (4, ('Tue Nov 25 08:57:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af (de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da)\nde4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da\nTue Nov 25 08:57:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af (de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da)\nde4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7fea18-994f-4b12-b641-a76b9794092f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a3361c5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:16 np0005534516 kernel: tap6a3361c5-40: left promiscuous mode
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.074 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5c8880-86d0-48fd-99a2-ab3da66549cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.092 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[411fc128-7e82-49a1-8f98-49341af569da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.093 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29bd70c9-1537-481a-bb07-472bfdc2c34d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acf64a33-706d-4304-a85a-214b51b17f40]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636421, 'reachable_time': 40953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382138, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.114 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.114 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c431361b-7f51-4874-9db5-fdf529b0f4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6a3361c5\x2d4f78\x2d4935\x2d9e24\x2dd43d47b272af.mount: Deactivated successfully.
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.236 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating config drive at /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.248 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_k7ex2jg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.313 253542 INFO nova.virt.libvirt.driver [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deleting instance files /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c_del#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.315 253542 INFO nova.virt.libvirt.driver [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deletion of /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c_del complete#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.377 253542 INFO nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG oslo.service.loopingcall [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG nova.network.neutron [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.396 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_k7ex2jg" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.421 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.424 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.494 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.496 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.515 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.593 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.594 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deleting local config drive /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config because it was imported into RBD.#033[00m
Nov 25 03:57:16 np0005534516 kernel: tap88814764-01: entered promiscuous mode
Nov 25 03:57:16 np0005534516 NetworkManager[48915]: <info>  [1764061036.6684] manager: (tap88814764-01): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Nov 25 03:57:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:16Z|01258|binding|INFO|Claiming lport 88814764-016b-4232-82f8-72dbbb384932 for this chassis.
Nov 25 03:57:16 np0005534516 systemd-udevd[382037]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:57:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:16Z|01259|binding|INFO|88814764-016b-4232-82f8-72dbbb384932: Claiming fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.678 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:8a:98 10.100.0.4'], port_security=['fa:16:3e:4c:8a:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b511004-21d7-4867-aa46-4e7219827b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55543b53-ee52-41ca-ba2e-341088afdcaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54ab33f9507e43fca43c45e6fc57f565', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6831af45-79a5-4ed4-afa6-6b43609f2269 cfebd55d-d59e-4c7b-966a-1919c4910d21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90ba94bc-a7de-4a4f-bb6b-69f6919bb708, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88814764-016b-4232-82f8-72dbbb384932) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.680 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88814764-016b-4232-82f8-72dbbb384932 in datapath 55543b53-ee52-41ca-ba2e-341088afdcaa bound to our chassis#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.681 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55543b53-ee52-41ca-ba2e-341088afdcaa#033[00m
Nov 25 03:57:16 np0005534516 NetworkManager[48915]: <info>  [1764061036.6847] device (tap88814764-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:57:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:16Z|01260|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 ovn-installed in OVS
Nov 25 03:57:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:16Z|01261|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 up in Southbound
Nov 25 03:57:16 np0005534516 NetworkManager[48915]: <info>  [1764061036.6859] device (tap88814764-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:16 np0005534516 nova_compute[253538]: 2025-11-25 08:57:16.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05cae298-dbd7-4e5b-bbf1-613f3864ccd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.694 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55543b53-e1 in ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.697 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55543b53-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d816fea1-3298-4ec7-aff8-7e401d1a9d20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee5210d-654d-4aa9-9631-a645251ae41b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 systemd-machined[215790]: New machine qemu-152-instance-0000007a.
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.712 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[50be2c86-54bd-4891-b1cc-2cab95369f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 systemd[1]: Started Virtual Machine qemu-152-instance-0000007a.
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e18702be-890c-4a44-9291-6c40cb0beb2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.785 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[202833c2-187a-41d7-a27c-20fda2f68da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75d4a8f0-f649-4d6a-b311-8e62917a9041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 NetworkManager[48915]: <info>  [1764061036.7939] manager: (tap55543b53-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.955 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d817a81e-b028-4de1-9fe4-6b2c5c83fd31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.959 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d8705904-687b-44e7-bb90-2f4dad156cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:16 np0005534516 NetworkManager[48915]: <info>  [1764061036.9894] device (tap55543b53-e0): carrier: link connected
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.000 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d611a926-0990-4021-8fe8-b4af46ca53f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e0d942-5d9e-4228-933e-a84ec60045d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55543b53-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639158, 'reachable_time': 28515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382222, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.050 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2b2884-28ca-422f-b332-86b0d2578bfd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:794a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639158, 'tstamp': 639158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382223, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.066 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24c35fc5-cef8-47cb-b01a-a8876c91b5ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55543b53-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639158, 'reachable_time': 28515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382224, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.104 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[961bb28f-60c3-449e-ad37-211c0ec88920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2302ae09-47e5-42ab-8971-f15a995bf84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.183 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55543b53-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.183 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.184 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55543b53-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:17 np0005534516 kernel: tap55543b53-e0: entered promiscuous mode
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:17 np0005534516 NetworkManager[48915]: <info>  [1764061037.1871] manager: (tap55543b53-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.187 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55543b53-e0, col_values=(('external_ids', {'iface-id': '6aef391e-f696-4074-917b-8b0c7a47b4b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:17Z|01262|binding|INFO|Releasing lport 6aef391e-f696-4074-917b-8b0c7a47b4b6 from this chassis (sb_readonly=0)
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.204 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387b963b-ccb1-4d5f-8160-a174baa52697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.206 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-55543b53-ee52-41ca-ba2e-341088afdcaa
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 55543b53-ee52-41ca-ba2e-341088afdcaa
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:57:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.207 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'env', 'PROCESS_TAG=haproxy-55543b53-ee52-41ca-ba2e-341088afdcaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55543b53-ee52-41ca-ba2e-341088afdcaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.302 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.304 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.304 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.306 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061037.298575, 9b511004-21d7-4867-aa46-4e7219827b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.307 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Started (Lifecycle Event)#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.345 253542 DEBUG nova.network.neutron [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.350 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061037.3035667, 9b511004-21d7-4867-aa46-4e7219827b6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.372 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.374 253542 INFO nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 1.00 seconds to deallocate network for instance.#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.381 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.409 253542 DEBUG nova.compute.manager [req-4bda0695-a94d-469b-8fda-b575ca1fe5e9 req-99af7b91-9863-4b98-bb19-b6be1c855461 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-deleted-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.411 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.434 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.435 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.548 253542 DEBUG oslo_concurrency.processutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:17 np0005534516 podman[382298]: 2025-11-25 08:57:17.588198487 +0000 UTC m=+0.058639907 container create c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:57:17 np0005534516 systemd[1]: Started libpod-conmon-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope.
Nov 25 03:57:17 np0005534516 podman[382298]: 2025-11-25 08:57:17.557401159 +0000 UTC m=+0.027842619 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:57:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8b26fcd98a92c913e834a912a7b97668705f1f438f00d41eb5de431fe4dbaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:17 np0005534516 podman[382298]: 2025-11-25 08:57:17.688654442 +0000 UTC m=+0.159095882 container init c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:57:17 np0005534516 podman[382298]: 2025-11-25 08:57:17.698064108 +0000 UTC m=+0.168505518 container start c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:57:17 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : New worker (382330) forked
Nov 25 03:57:17 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : Loading success.
Nov 25 03:57:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 265 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 03:57:17 np0005534516 nova_compute[253538]: 2025-11-25 08:57:17.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905058715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.058 253542 DEBUG oslo_concurrency.processutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.066 253542 DEBUG nova.compute.provider_tree [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.089 253542 DEBUG nova.scheduler.client.report [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.132 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.179 253542 INFO nova.scheduler.client.report [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 43128a42-ed0f-42ff-8282-4ef978e7c43c#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.271 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.394 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.394 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:18 np0005534516 nova_compute[253538]: 2025-11-25 08:57:18.423 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.417 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 WARNING nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received unexpected event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Processing event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.421 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.421 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 WARNING nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received unexpected event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.423 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.427 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061039.427386, 9b511004-21d7-4867-aa46-4e7219827b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.429 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.432 253542 INFO nova.virt.libvirt.driver [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance spawned successfully.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.432 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.443 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.448 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.451 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.452 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.452 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.453 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.453 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.454 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.517 253542 INFO nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.518 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.592 253542 INFO nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 11.07 seconds to build instance.#033[00m
Nov 25 03:57:19 np0005534516 nova_compute[253538]: 2025-11-25 08:57:19.606 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:19 np0005534516 podman[382350]: 2025-11-25 08:57:19.845393895 +0000 UTC m=+0.092356584 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 03:57:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 213 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 03:57:20 np0005534516 nova_compute[253538]: 2025-11-25 08:57:20.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 03:57:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:22Z|01263|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 03:57:22 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:22Z|01264|binding|INFO|Releasing lport 6aef391e-f696-4074-917b-8b0c7a47b4b6 from this chassis (sb_readonly=0)
Nov 25 03:57:22 np0005534516 nova_compute[253538]: 2025-11-25 08:57:22.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:22 np0005534516 nova_compute[253538]: 2025-11-25 08:57:22.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 832 KiB/s wr, 98 op/s
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.859 253542 DEBUG nova.compute.manager [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.860 253542 DEBUG nova.compute.manager [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.860 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.861 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:25 np0005534516 nova_compute[253538]: 2025-11-25 08:57:25.861 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 104 op/s
Nov 25 03:57:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Nov 25 03:57:27 np0005534516 nova_compute[253538]: 2025-11-25 08:57:27.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:57:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:57:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:57:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:57:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.547 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.548 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.573 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.646 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061035.6448607, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.646 253542 INFO nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.666 253542 DEBUG nova.compute.manager [None req-21b6eb11-ef14-4acb-b771-31350e23435b - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:30 np0005534516 nova_compute[253538]: 2025-11-25 08:57:30.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 218 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 79 op/s
Nov 25 03:57:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:32Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 03:57:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:32Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 03:57:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:32 np0005534516 nova_compute[253538]: 2025-11-25 08:57:32.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 229 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 25 03:57:34 np0005534516 nova_compute[253538]: 2025-11-25 08:57:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:35 np0005534516 nova_compute[253538]: 2025-11-25 08:57:35.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:35 np0005534516 nova_compute[253538]: 2025-11-25 08:57:35.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:35 np0005534516 nova_compute[253538]: 2025-11-25 08:57:35.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 244 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 03:57:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 03:57:37 np0005534516 nova_compute[253538]: 2025-11-25 08:57:37.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 03:57:40 np0005534516 nova_compute[253538]: 2025-11-25 08:57:40.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 03:57:41 np0005534516 nova_compute[253538]: 2025-11-25 08:57:41.790 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 03:57:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:42 np0005534516 nova_compute[253538]: 2025-11-25 08:57:42.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.447 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.447 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.461 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.538 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.538 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.547 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.547 253542 INFO nova.compute.claims [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.629 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.654 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.655 253542 DEBUG nova.compute.provider_tree [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.671 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.704 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.737 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.756 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.756 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.757 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.758 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:57:43 np0005534516 nova_compute[253538]: 2025-11-25 08:57:43.778 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:43 np0005534516 podman[382378]: 2025-11-25 08:57:43.833347911 +0000 UTC m=+0.066160992 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 03:57:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.7 MiB/s wr, 50 op/s
Nov 25 03:57:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974948934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.277 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.287 253542 DEBUG nova.compute.provider_tree [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.304 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.341 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.342 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.409 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.409 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.431 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.448 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.542 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.544 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.545 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating image(s)#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.580 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.613 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.639 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.643 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.688 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.737 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.737 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.738 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.738 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.757 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:44 np0005534516 nova_compute[253538]: 2025-11-25 08:57:44.761 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:44 np0005534516 podman[382475]: 2025-11-25 08:57:44.804565346 +0000 UTC m=+0.058651797 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.100 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.131 253542 DEBUG nova.policy [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.167 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.286 253542 DEBUG nova.objects.instance [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.303 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.304 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Ensure instance console log exists: /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.305 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.306 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.306 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:45 np0005534516 nova_compute[253538]: 2025-11-25 08:57:45.870 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Successfully created port: 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:57:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev da6d54f2-07cd-4708-ae01-0afea329c729 does not exist
Nov 25 03:57:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1b45df8e-ae1f-4507-8a28-ce5bfb36f342 does not exist
Nov 25 03:57:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev af0116fb-88eb-4d06-b211-2a39b9ea5135 does not exist
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:57:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:57:46 np0005534516 nova_compute[253538]: 2025-11-25 08:57:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:46 np0005534516 podman[382875]: 2025-11-25 08:57:46.90326911 +0000 UTC m=+0.068489865 container create 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 03:57:46 np0005534516 systemd[1]: Started libpod-conmon-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope.
Nov 25 03:57:46 np0005534516 podman[382875]: 2025-11-25 08:57:46.87423427 +0000 UTC m=+0.039455075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:57:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:57:47 np0005534516 podman[382875]: 2025-11-25 08:57:47.022523176 +0000 UTC m=+0.187743981 container init 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 03:57:47 np0005534516 podman[382875]: 2025-11-25 08:57:47.036151497 +0000 UTC m=+0.201372262 container start 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 03:57:47 np0005534516 podman[382875]: 2025-11-25 08:57:47.041041321 +0000 UTC m=+0.206262106 container attach 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:57:47 np0005534516 naughty_robinson[382891]: 167 167
Nov 25 03:57:47 np0005534516 systemd[1]: libpod-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope: Deactivated successfully.
Nov 25 03:57:47 np0005534516 conmon[382891]: conmon 8e361f66d041016362b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope/container/memory.events
Nov 25 03:57:47 np0005534516 podman[382875]: 2025-11-25 08:57:47.048553424 +0000 UTC m=+0.213774199 container died 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:57:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-92d12538079d78a05f067a0d6ade0c71e5266dfef5c0f8443eeb5ce42854d25d-merged.mount: Deactivated successfully.
Nov 25 03:57:47 np0005534516 podman[382875]: 2025-11-25 08:57:47.101019593 +0000 UTC m=+0.266240348 container remove 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 03:57:47 np0005534516 systemd[1]: libpod-conmon-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope: Deactivated successfully.
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.164 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Successfully updated port: 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.179 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.179 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.180 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.256 253542 DEBUG nova.compute.manager [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.257 253542 DEBUG nova.compute.manager [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.257 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:47 np0005534516 podman[382915]: 2025-11-25 08:57:47.319096779 +0000 UTC m=+0.048563153 container create f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.335 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:57:47 np0005534516 systemd[1]: Started libpod-conmon-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope.
Nov 25 03:57:47 np0005534516 podman[382915]: 2025-11-25 08:57:47.298858458 +0000 UTC m=+0.028324882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:47 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:47 np0005534516 podman[382915]: 2025-11-25 08:57:47.409082868 +0000 UTC m=+0.138549262 container init f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:57:47 np0005534516 podman[382915]: 2025-11-25 08:57:47.426385489 +0000 UTC m=+0.155851903 container start f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:57:47 np0005534516 podman[382915]: 2025-11-25 08:57:47.431027655 +0000 UTC m=+0.160494049 container attach f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 263 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 914 KiB/s wr, 8 op/s
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.962 253542 DEBUG nova.compute.manager [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG nova.compute.manager [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.964 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:47 np0005534516 nova_compute[253538]: 2025-11-25 08:57:47.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166584326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.028 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.094 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.096 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.096 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.097 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.097 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.100 253542 INFO nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Terminating instance#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.102 253542 DEBUG nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:57:48 np0005534516 kernel: tap88814764-01 (unregistering): left promiscuous mode
Nov 25 03:57:48 np0005534516 NetworkManager[48915]: <info>  [1764061068.1998] device (tap88814764-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:57:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:48Z|01265|binding|INFO|Releasing lport 88814764-016b-4232-82f8-72dbbb384932 from this chassis (sb_readonly=0)
Nov 25 03:57:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:48Z|01266|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 down in Southbound
Nov 25 03:57:48 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:48Z|01267|binding|INFO|Removing iface tap88814764-01 ovn-installed in OVS
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.259 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:8a:98 10.100.0.4'], port_security=['fa:16:3e:4c:8a:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b511004-21d7-4867-aa46-4e7219827b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55543b53-ee52-41ca-ba2e-341088afdcaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54ab33f9507e43fca43c45e6fc57f565', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6831af45-79a5-4ed4-afa6-6b43609f2269 cfebd55d-d59e-4c7b-966a-1919c4910d21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90ba94bc-a7de-4a4f-bb6b-69f6919bb708, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88814764-016b-4232-82f8-72dbbb384932) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88814764-016b-4232-82f8-72dbbb384932 in datapath 55543b53-ee52-41ca-ba2e-341088afdcaa unbound from our chassis#033[00m
Nov 25 03:57:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.261 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55543b53-ee52-41ca-ba2e-341088afdcaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:57:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9df05a-b3a4-4b13-88ff-20b0a3ad7849]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa namespace which is not needed anymore#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:48 np0005534516 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 25 03:57:48 np0005534516 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007a.scope: Consumed 13.510s CPU time.
Nov 25 03:57:48 np0005534516 systemd-machined[215790]: Machine qemu-152-instance-0000007a terminated.
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.323 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.341 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.341 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance network_info: |[{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.342 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.342 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.346 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start _get_guest_xml network_info=[{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.351 253542 WARNING nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.356 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.357 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.360 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.360 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.361 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.361 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.369 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:48 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : haproxy version is 2.8.14-c23fe91
Nov 25 03:57:48 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : path to executable is /usr/sbin/haproxy
Nov 25 03:57:48 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [WARNING]  (382318) : Exiting Master process...
Nov 25 03:57:48 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [ALERT]    (382318) : Current worker (382330) exited with code 143 (Terminated)
Nov 25 03:57:48 np0005534516 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [WARNING]  (382318) : All workers exited. Exiting... (0)
Nov 25 03:57:48 np0005534516 systemd[1]: libpod-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope: Deactivated successfully.
Nov 25 03:57:48 np0005534516 podman[382997]: 2025-11-25 08:57:48.47132333 +0000 UTC m=+0.093120765 container died c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:57:48 np0005534516 vibrant_shockley[382932]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:57:48 np0005534516 vibrant_shockley[382932]: --> relative data size: 1.0
Nov 25 03:57:48 np0005534516 vibrant_shockley[382932]: --> All data devices are unavailable
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.560 253542 INFO nova.virt.libvirt.driver [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance destroyed successfully.#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.561 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.565 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.566 253542 DEBUG nova.objects.instance [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'resources' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:48 np0005534516 systemd[1]: libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Deactivated successfully.
Nov 25 03:57:48 np0005534516 systemd[1]: libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Consumed 1.040s CPU time.
Nov 25 03:57:48 np0005534516 conmon[382932]: conmon f751eb85115f95617175 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope/container/memory.events
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.575 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.576 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.581 253542 DEBUG nova.virt.libvirt.vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:57:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:57:19Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.581 253542 DEBUG nova.network.os_vif_util [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.583 253542 DEBUG nova.network.os_vif_util [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.583 253542 DEBUG os_vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88814764-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.594 253542 INFO os_vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01')#033[00m
Nov 25 03:57:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866-userdata-shm.mount: Deactivated successfully.
Nov 25 03:57:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bf8b26fcd98a92c913e834a912a7b97668705f1f438f00d41eb5de431fe4dbaf-merged.mount: Deactivated successfully.
Nov 25 03:57:48 np0005534516 podman[383065]: 2025-11-25 08:57:48.635233612 +0000 UTC m=+0.037327857 container died f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.813 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3304MB free_disk=59.89720153808594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:57:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631145107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.864 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.891 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.896 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.941 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ce8c3428-f7e4-49aa-9978-faaf5d514663 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.942 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9b511004-21d7-4867-aa46-4e7219827b6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.942 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e55aa91-2aa5-4443-b976-0f3e4409e8ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.943 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:57:48 np0005534516 nova_compute[253538]: 2025-11-25 08:57:48.943 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:57:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85-merged.mount: Deactivated successfully.
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.038 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:49 np0005534516 podman[383065]: 2025-11-25 08:57:49.239776796 +0000 UTC m=+0.641871051 container remove f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 03:57:49 np0005534516 systemd[1]: libpod-conmon-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Deactivated successfully.
Nov 25 03:57:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:57:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907602818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.359 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 WARNING nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received unexpected event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.362 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.363 253542 DEBUG nova.virt.libvirt.vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:44Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.363 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.364 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.366 253542 DEBUG nova.objects.instance [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:49 np0005534516 podman[382997]: 2025-11-25 08:57:49.369849377 +0000 UTC m=+0.991646802 container cleanup c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:57:49 np0005534516 systemd[1]: libpod-conmon-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope: Deactivated successfully.
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.388 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <uuid>5e55aa91-2aa5-4443-b976-0f3e4409e8ec</uuid>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <name>instance-0000007b</name>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-2129911111</nova:name>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:57:48</nova:creationTime>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <nova:port uuid="027edfd6-09a6-4bf4-88df-8a19e59d1f72">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="serial">5e55aa91-2aa5-4443-b976-0f3e4409e8ec</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="uuid">5e55aa91-2aa5-4443-b976-0f3e4409e8ec</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d9:e8:8d"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <target dev="tap027edfd6-09"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/console.log" append="off"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:57:49 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:57:49 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:57:49 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:57:49 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Preparing to wait for external event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.390 253542 DEBUG nova.virt.libvirt.vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:44Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.390 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG os_vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.392 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.393 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.396 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap027edfd6-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.396 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap027edfd6-09, col_values=(('external_ids', {'iface-id': '027edfd6-09a6-4bf4-88df-8a19e59d1f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:e8:8d', 'vm-uuid': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:49 np0005534516 NetworkManager[48915]: <info>  [1764061069.4365] manager: (tap027edfd6-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.446 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.447 253542 INFO os_vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09')#033[00m
Nov 25 03:57:49 np0005534516 podman[383186]: 2025-11-25 08:57:49.497302916 +0000 UTC m=+0.062257976 container remove c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.513 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66b62e78-ac29-4494-958c-0730975c4091]: (4, ('Tue Nov 25 08:57:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa (c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866)\nc5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866\nTue Nov 25 08:57:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa (c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866)\nc5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.518 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb31cc7-3921-4c70-af80-756665af7c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.517 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.518 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.519 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d9:e8:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.519 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55543b53-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.520 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Using config drive#033[00m
Nov 25 03:57:49 np0005534516 kernel: tap55543b53-e0: left promiscuous mode
Nov 25 03:57:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483430628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8da167f-2431-4999-bc12-7027a94d1d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.561 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc869958-aadb-4fa6-b795-53289a1764e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.563 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d7ea93-28d6-40f3-a507-45e964c9fe39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.571 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.580 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0128292-54d0-44d3-b727-16dfba4c7e7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639137, 'reachable_time': 27178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383258, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.584 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:57:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.584 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7a18cf-b1be-4921-aaf4-bc6202e837d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.584 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:49 np0005534516 systemd[1]: run-netns-ovnmeta\x2d55543b53\x2dee52\x2d41ca\x2dba2e\x2d341088afdcaa.mount: Deactivated successfully.
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.601 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.624 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.625 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.754 253542 INFO nova.virt.libvirt.driver [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deleting instance files /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e_del#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.755 253542 INFO nova.virt.libvirt.driver [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deletion of /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e_del complete#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.807 253542 INFO nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 1.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.808 253542 DEBUG oslo.service.loopingcall [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.808 253542 DEBUG nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:57:49 np0005534516 nova_compute[253538]: 2025-11-25 08:57:49.809 253542 DEBUG nova.network.neutron [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:57:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 265 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.127654663 +0000 UTC m=+0.068961797 container create 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:57:50 np0005534516 systemd[1]: Started libpod-conmon-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope.
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.098893151 +0000 UTC m=+0.040200325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.196 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.197 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.222 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.227965054 +0000 UTC m=+0.169272248 container init 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.24028269 +0000 UTC m=+0.181589824 container start 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.245046389 +0000 UTC m=+0.186353573 container attach 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:57:50 np0005534516 eloquent_kirch[383362]: 167 167
Nov 25 03:57:50 np0005534516 systemd[1]: libpod-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope: Deactivated successfully.
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.249291255 +0000 UTC m=+0.190598349 container died 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.277 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating config drive at /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config#033[00m
Nov 25 03:57:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-075fa444125eae345e8aaadbfeb4286d847d6d7fe6ecee7692abf608b566dc29-merged.mount: Deactivated successfully.
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.287 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudr_f3iy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:50 np0005534516 podman[383345]: 2025-11-25 08:57:50.30238818 +0000 UTC m=+0.243695304 container remove 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:57:50 np0005534516 systemd[1]: libpod-conmon-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope: Deactivated successfully.
Nov 25 03:57:50 np0005534516 podman[383359]: 2025-11-25 08:57:50.328197432 +0000 UTC m=+0.145024888 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.441 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudr_f3iy" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.482 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.486 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.526 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.528 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.545 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:50 np0005534516 podman[383416]: 2025-11-25 08:57:50.547401348 +0000 UTC m=+0.063610751 container create fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 03:57:50 np0005534516 systemd[1]: Started libpod-conmon-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope.
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.609 253542 DEBUG nova.network.neutron [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:50 np0005534516 podman[383416]: 2025-11-25 08:57:50.518858682 +0000 UTC m=+0.035068105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.625 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.627 253542 INFO nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 25 03:57:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:50 np0005534516 podman[383416]: 2025-11-25 08:57:50.670622292 +0000 UTC m=+0.186831695 container init fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.677 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.678 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:50 np0005534516 podman[383416]: 2025-11-25 08:57:50.687954804 +0000 UTC m=+0.204164187 container start fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 03:57:50 np0005534516 podman[383416]: 2025-11-25 08:57:50.691789409 +0000 UTC m=+0.207998802 container attach fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.694 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.694 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deleting local config drive /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config because it was imported into RBD.#033[00m
Nov 25 03:57:50 np0005534516 kernel: tap027edfd6-09: entered promiscuous mode
Nov 25 03:57:50 np0005534516 NetworkManager[48915]: <info>  [1764061070.7600] manager: (tap027edfd6-09): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Nov 25 03:57:50 np0005534516 systemd-udevd[382970]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:57:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:50Z|01268|binding|INFO|Claiming lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 for this chassis.
Nov 25 03:57:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:50Z|01269|binding|INFO|027edfd6-09a6-4bf4-88df-8a19e59d1f72: Claiming fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.771 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e8:8d 10.100.0.9'], port_security=['fa:16:3e:d9:e8:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7a9182-641d-4ca5-a3ab-361222a77391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=027edfd6-09a6-4bf4-88df-8a19e59d1f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.772 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 in datapath bf619d00-d285-4b9e-9996-77997075375e bound to our chassis#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.774 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e#033[00m
Nov 25 03:57:50 np0005534516 NetworkManager[48915]: <info>  [1764061070.7797] device (tap027edfd6-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:57:50 np0005534516 NetworkManager[48915]: <info>  [1764061070.7806] device (tap027edfd6-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.785 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[123d1eee-e45b-4efa-9277-8e97f9cc2b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.786 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf619d00-d1 in ovnmeta-bf619d00-d285-4b9e-9996-77997075375e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.790 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf619d00-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.790 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ee34ec05-7c7b-46ac-8a71-8afa3af24a74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:50Z|01270|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 ovn-installed in OVS
Nov 25 03:57:50 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:50Z|01271|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 up in Southbound
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8db51c1b-de41-4681-b898-1a64a232cdd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:50 np0005534516 nova_compute[253538]: 2025-11-25 08:57:50.802 253542 DEBUG oslo_concurrency.processutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.811 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d91435ee-afae-443d-a0cf-922be8f24858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 systemd-machined[215790]: New machine qemu-153-instance-0000007b.
Nov 25 03:57:50 np0005534516 systemd[1]: Started Virtual Machine qemu-153-instance-0000007b.
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b869b56-a1ff-427d-8b6f-11e83878f6a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.859 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[727fae92-6d8e-4829-b9a7-f42c6480cb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 NetworkManager[48915]: <info>  [1764061070.8657] manager: (tapbf619d00-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/522)
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03a96821-fad6-467d-9d93-c98e32abe30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.907 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[be5267df-9d08-408a-b8e5-d0b3c5084766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.911 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50965635-bc0d-4938-9e48-9c7e3529ab92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 NetworkManager[48915]: <info>  [1764061070.9336] device (tapbf619d00-d0): carrier: link connected
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.941 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d84d9e-2db5-4fe0-a5a4-2b8ce5dc51b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.961 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af6e1ae3-dcf3-4ca6-884b-3c3db040d4ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383534, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8833099c-a374-4d65-a971-6da3dfddeb88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:be0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642553, 'tstamp': 642553}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383535, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0919a9-f5ee-4501-a1b8-b681fc7e4c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383536, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4023bd-1e31-4621-9040-5caf113bd05c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.099 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47b8491b-7b3f-4647-9eb4-912ba65a1ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:51 np0005534516 kernel: tapbf619d00-d0: entered promiscuous mode
Nov 25 03:57:51 np0005534516 NetworkManager[48915]: <info>  [1764061071.1044] manager: (tapbf619d00-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.113 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:51 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:51Z|01272|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.143 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90239e25-1c6b-424b-8b44-bbe4a547583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.144 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-bf619d00-d285-4b9e-9996-77997075375e
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID bf619d00-d285-4b9e-9996-77997075375e
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:57:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.145 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'env', 'PROCESS_TAG=haproxy-bf619d00-d285-4b9e-9996-77997075375e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf619d00-d285-4b9e-9996-77997075375e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
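The haproxy configuration dumped above is generated per network: the network UUID is substituted into the log-tag, the pidfile path, and the `X-OVN-Network-ID` header, while the listener always binds the metadata address 169.254.169.254:80 inside the namespace and forwards to the agent's unix socket. A condensed sketch of that templating, with values copied from the log (the template string itself is an assumption for illustration, not Neutron's actual driver code):

```python
# Illustrative per-network haproxy config renderer. Field values are taken
# from the config printed in the log; the template layout is an assumption,
# not the neutron.agent.ovn.metadata.driver source.
HAPROXY_TMPL = """global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    maxconn     1024
    pidfile     {pid_dir}/{network_id}.pid.haproxy
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata {socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_config(network_id: str) -> str:
    return HAPROXY_TMPL.format(
        network_id=network_id,
        pid_dir="/var/lib/neutron/external/pids",
        socket_path="/var/lib/neutron/metadata_proxy",  # unix socket to the agent
    )

cfg = render_config("bf619d00-d285-4b9e-9996-77997075375e")
```

The "Unable to access ...pid.haproxy" DEBUG line a few entries earlier is the expected first-run case: the agent probes the pidfile before spawning haproxy, finds none, and proceeds to write the config and launch the proxy via rootwrap inside the namespace.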
Nov 25 03:57:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521971948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.266 253542 DEBUG oslo_concurrency.processutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.273 253542 DEBUG nova.compute.provider_tree [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.294 253542 DEBUG nova.scheduler.client.report [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
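The inventory dict in the line above determines how much the scheduler can actually place on this host: for each resource class, schedulable capacity is `(total - reserved) * allocation_ratio`, the standard Placement calculation. Applying it to the logged values:

```python
# Schedulable capacity implied by the inventory logged above:
# capacity = (total - reserved) * allocation_ratio (standard Placement math).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

print(capacity(inventory))
```

So this 8-vCPU host schedules as 32 vCPUs (4x overcommit), while disk is undercommitted (ratio 0.9) to leave headroom, yielding roughly 52 GB of placeable disk.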
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.319 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.331 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.3305693, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.332 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Started (Lifecycle Event)#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.349 253542 INFO nova.scheduler.client.report [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Deleted allocations for instance 9b511004-21d7-4867-aa46-4e7219827b6e#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.351 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.355 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.335, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.356 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.395 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.404 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
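The "current DB power_state: 0, VM power_state: 3" comparison above uses Nova's numeric power-state codes: the database still records NOSTATE (0) because the instance is mid-spawn, while libvirt already reports PAUSED (3). A small decoder, with values listed from memory of `nova.compute.power_state` (verify against your Nova release):

```python
# Numeric power-state codes as used in the sync_power_state log line above.
# Assumption: values match nova.compute.power_state for this release.
POWER_STATE = {
    0: 'NOSTATE',    # no state recorded yet (e.g. still building)
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

db_state, vm_state = 0, 3   # values from the log line
print(f"DB={POWER_STATE[db_state]} VM={POWER_STATE[vm_state]}")
```

Because the instance still has task_state `spawning`, the sync logic skips reconciliation ("During sync_power_state the instance has a pending task (spawning). Skip." a few lines later) rather than treating the transient Paused state as drift.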
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.432 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-deleted-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.433 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.433 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.434 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.434 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.435 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Processing event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.436 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.437 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.437 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.438 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.438 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.439 253542 WARNING nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state building and task_state spawning.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.441 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.442 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.447 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.4468732, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Resumed (Lifecycle Event)
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.455 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.459 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.466 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance spawned successfully.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.468 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.471 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.473 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.491 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.493 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.493 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.498 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]: {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    "0": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "devices": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "/dev/loop3"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            ],
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_name": "ceph_lv0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_size": "21470642176",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "name": "ceph_lv0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "tags": {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_name": "ceph",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.crush_device_class": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.encrypted": "0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_id": "0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.vdo": "0"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            },
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "vg_name": "ceph_vg0"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        }
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    ],
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    "1": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "devices": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "/dev/loop4"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            ],
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_name": "ceph_lv1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_size": "21470642176",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "name": "ceph_lv1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "tags": {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_name": "ceph",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.crush_device_class": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.encrypted": "0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_id": "1",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.vdo": "0"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            },
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "vg_name": "ceph_vg1"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        }
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    ],
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    "2": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "devices": [
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "/dev/loop5"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            ],
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_name": "ceph_lv2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_size": "21470642176",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "name": "ceph_lv2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "tags": {
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.cluster_name": "ceph",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.crush_device_class": "",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.encrypted": "0",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osd_id": "2",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:                "ceph.vdo": "0"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            },
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "type": "block",
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:            "vg_name": "ceph_vg2"
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:        }
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]:    ]
Nov 25 03:57:51 np0005534516 youthful_hopper[383462]: }
Nov 25 03:57:51 np0005534516 podman[383617]: 2025-11-25 08:57:51.535468302 +0000 UTC m=+0.046821075 container create 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 03:57:51 np0005534516 systemd[1]: libpod-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope: Deactivated successfully.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.546 253542 INFO nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 7.00 seconds to spawn the instance on the hypervisor.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.547 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 03:57:51 np0005534516 podman[383416]: 2025-11-25 08:57:51.550371018 +0000 UTC m=+1.066580421 container died fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:57:51 np0005534516 systemd[1]: Started libpod-conmon-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope.
Nov 25 03:57:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6-merged.mount: Deactivated successfully.
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.596 253542 INFO nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 8.09 seconds to build instance.
Nov 25 03:57:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:51 np0005534516 podman[383617]: 2025-11-25 08:57:51.510795861 +0000 UTC m=+0.022148654 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:57:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4540958df62a9ce7e0b75ff06f1bfd795b20e2ca8437d0035d39c10d955612c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:51 np0005534516 nova_compute[253538]: 2025-11-25 08:57:51.614 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:57:51 np0005534516 podman[383617]: 2025-11-25 08:57:51.621857024 +0000 UTC m=+0.133209817 container init 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:57:51 np0005534516 podman[383617]: 2025-11-25 08:57:51.630647143 +0000 UTC m=+0.141999906 container start 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:57:51 np0005534516 podman[383416]: 2025-11-25 08:57:51.63237181 +0000 UTC m=+1.148581203 container remove fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:57:51 np0005534516 systemd[1]: libpod-conmon-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope: Deactivated successfully.
Nov 25 03:57:51 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : New worker (383649) forked
Nov 25 03:57:51 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : Loading success.
Nov 25 03:57:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 238 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.308475253 +0000 UTC m=+0.050948758 container create 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 03:57:52 np0005534516 systemd[1]: Started libpod-conmon-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope.
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.280612104 +0000 UTC m=+0.023085629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.424450389 +0000 UTC m=+0.166923904 container init 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.432809356 +0000 UTC m=+0.175282861 container start 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.436915419 +0000 UTC m=+0.179388914 container attach 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 03:57:52 np0005534516 friendly_hopper[383813]: 167 167
Nov 25 03:57:52 np0005534516 systemd[1]: libpod-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope: Deactivated successfully.
Nov 25 03:57:52 np0005534516 conmon[383813]: conmon 4c6f23bdbb271279c068 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope/container/memory.events
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.441447011 +0000 UTC m=+0.183920506 container died 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:57:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7f145041488365d4a77ff28b2ae9cd508b93550684605cb3f543443de51deda3-merged.mount: Deactivated successfully.
Nov 25 03:57:52 np0005534516 podman[383797]: 2025-11-25 08:57:52.480517295 +0000 UTC m=+0.222990790 container remove 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:57:52 np0005534516 systemd[1]: libpod-conmon-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope: Deactivated successfully.
Nov 25 03:57:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:52 np0005534516 podman[383837]: 2025-11-25 08:57:52.67360064 +0000 UTC m=+0.051048780 container create 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:57:52 np0005534516 systemd[1]: Started libpod-conmon-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope.
Nov 25 03:57:52 np0005534516 podman[383837]: 2025-11-25 08:57:52.649632939 +0000 UTC m=+0.027081109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:57:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:57:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:57:52 np0005534516 podman[383837]: 2025-11-25 08:57:52.776849071 +0000 UTC m=+0.154297211 container init 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:57:52 np0005534516 podman[383837]: 2025-11-25 08:57:52.785102636 +0000 UTC m=+0.162550756 container start 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:57:52 np0005534516 podman[383837]: 2025-11-25 08:57:52.788256831 +0000 UTC m=+0.165704981 container attach 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 03:57:52 np0005534516 nova_compute[253538]: 2025-11-25 08:57:52.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:57:53
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root']
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]: {
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_id": 1,
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "type": "bluestore"
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    },
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_id": 2,
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "type": "bluestore"
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    },
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_id": 0,
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:        "type": "bluestore"
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]:    }
Nov 25 03:57:53 np0005534516 lucid_goldstine[383854]: }
Nov 25 03:57:53 np0005534516 systemd[1]: libpod-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope: Deactivated successfully.
Nov 25 03:57:53 np0005534516 podman[383887]: 2025-11-25 08:57:53.835799844 +0000 UTC m=+0.033668337 container died 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 03:57:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258-merged.mount: Deactivated successfully.
Nov 25 03:57:53 np0005534516 podman[383887]: 2025-11-25 08:57:53.907625169 +0000 UTC m=+0.105493662 container remove 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 03:57:53 np0005534516 systemd[1]: libpod-conmon-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope: Deactivated successfully.
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 979 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 03:57:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:57:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:57:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6d68413d-4660-4103-9cd1-583098033ae1 does not exist
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bad59913-bdd8-486a-bdbd-7222b7fa76f4 does not exist
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:57:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:57:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:54.122 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:54.124 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.259 253542 DEBUG nova.compute.manager [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.260 253542 DEBUG nova.compute.manager [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.261 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.261 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.262 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:54 np0005534516 nova_compute[253538]: 2025-11-25 08:57:54.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:57:55 np0005534516 nova_compute[253538]: 2025-11-25 08:57:55.509 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:55 np0005534516 nova_compute[253538]: 2025-11-25 08:57:55.509 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:55 np0005534516 nova_compute[253538]: 2025-11-25 08:57:55.526 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:55Z|01273|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 03:57:55 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:55Z|01274|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 03:57:55 np0005534516 nova_compute[253538]: 2025-11-25 08:57:55.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.680 253542 DEBUG nova.compute.manager [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.681 253542 DEBUG nova.compute.manager [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.682 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.682 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.683 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.774 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.775 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.776 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.776 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.777 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.780 253542 INFO nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Terminating instance#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.782 253542 DEBUG nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:57:56 np0005534516 kernel: tapd92eef96-9b (unregistering): left promiscuous mode
Nov 25 03:57:56 np0005534516 NetworkManager[48915]: <info>  [1764061076.8521] device (tapd92eef96-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:57:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:56Z|01275|binding|INFO|Releasing lport d92eef96-9bbe-4743-96d0-393e7e6de4ee from this chassis (sb_readonly=0)
Nov 25 03:57:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:56Z|01276|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee down in Southbound
Nov 25 03:57:56 np0005534516 ovn_controller[152859]: 2025-11-25T08:57:56Z|01277|binding|INFO|Removing iface tapd92eef96-9b ovn-installed in OVS
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.878 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:ec 10.100.0.3'], port_security=['fa:16:3e:0d:26:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8c3428-f7e4-49aa-9978-faaf5d514663', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58e30486-fde6-46bb-8263-c463bd38a1f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '84e8c954-c3f0-4a6c-88b0-2dc68f7ce745 aec330ab-8d77-47ae-8de6-bec0741c3114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8915864-93a9-4ad1-b7bb-a11d22ed3f29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d92eef96-9bbe-4743-96d0-393e7e6de4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.879 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d92eef96-9bbe-4743-96d0-393e7e6de4ee in datapath 58e30486-fde6-46bb-8263-c463bd38a1f9 unbound from our chassis#033[00m
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.880 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58e30486-fde6-46bb-8263-c463bd38a1f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.884 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3073298e-5604-4f1c-988a-3cf2867aa3f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 namespace which is not needed anymore#033[00m
Nov 25 03:57:56 np0005534516 nova_compute[253538]: 2025-11-25 08:57:56.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:56 np0005534516 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 25 03:57:56 np0005534516 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Consumed 17.128s CPU time.
Nov 25 03:57:56 np0005534516 systemd-machined[215790]: Machine qemu-150-instance-00000078 terminated.
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.030 253542 INFO nova.virt.libvirt.driver [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance destroyed successfully.#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.030 253542 DEBUG nova.objects.instance [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.041 253542 DEBUG nova.virt.libvirt.vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:31Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.041 253542 DEBUG nova.network.os_vif_util [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.042 253542 DEBUG nova.network.os_vif_util [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.042 253542 DEBUG os_vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.045 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd92eef96-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : haproxy version is 2.8.14-c23fe91
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : path to executable is /usr/sbin/haproxy
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : Exiting Master process...
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : Exiting Master process...
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.054 253542 INFO os_vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b')#033[00m
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [ALERT]    (380259) : Current worker (380261) exited with code 143 (Terminated)
Nov 25 03:57:57 np0005534516 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : All workers exited. Exiting... (0)
Nov 25 03:57:57 np0005534516 systemd[1]: libpod-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope: Deactivated successfully.
Nov 25 03:57:57 np0005534516 podman[383975]: 2025-11-25 08:57:57.065259686 +0000 UTC m=+0.060548620 container died 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:57:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a-userdata-shm.mount: Deactivated successfully.
Nov 25 03:57:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-97985d9ed470fd0792a7d93bf1b618a370e10b24c8a74fa4f0901da8068fed82-merged.mount: Deactivated successfully.
Nov 25 03:57:57 np0005534516 podman[383975]: 2025-11-25 08:57:57.111848854 +0000 UTC m=+0.107137788 container cleanup 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 03:57:57 np0005534516 systemd[1]: libpod-conmon-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope: Deactivated successfully.
Nov 25 03:57:57 np0005534516 podman[384028]: 2025-11-25 08:57:57.21272399 +0000 UTC m=+0.068775454 container remove 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b703ce56-9d99-4666-b426-5e7ca06dfe0c]: (4, ('Tue Nov 25 08:57:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 (3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a)\n3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a\nTue Nov 25 08:57:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 (3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a)\n3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.222 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1311ce-24dc-46c5-b820-3588a3265c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.223 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e30486-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 kernel: tap58e30486-f0: left promiscuous mode
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[074a3ff8-445a-42cc-8d2c-02c360a6c973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.260 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2361aa0f-dcb1-444e-a5a5-74cae6729394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36168b02-700c-4ccf-b6cc-881821153070]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e47d2-8d38-4887-b172-af25c454be38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634523, 'reachable_time': 35682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384043, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 systemd[1]: run-netns-ovnmeta\x2d58e30486\x2dfde6\x2d46bb\x2d8263\x2dc463bd38a1f9.mount: Deactivated successfully.
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.296 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:57:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.296 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4eac92bb-8337-413c-8ab1-b51437dc6166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.570 253542 INFO nova.virt.libvirt.driver [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deleting instance files /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663_del#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.571 253542 INFO nova.virt.libvirt.driver [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deletion of /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663_del complete#033[00m
Nov 25 03:57:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.643 253542 INFO nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.643 253542 DEBUG oslo.service.loopingcall [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.646 253542 DEBUG nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.646 253542 DEBUG nova.network.neutron [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:57:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 25 03:57:57 np0005534516 nova_compute[253538]: 2025-11-25 08:57:57.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.411 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.412 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.446 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.637 253542 DEBUG nova.network.neutron [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.657 253542 INFO nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 1.01 seconds to deallocate network for instance.#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.710 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.711 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.796 253542 DEBUG oslo_concurrency.processutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.839 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.839 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 WARNING nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.843 253542 WARNING nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:57:58 np0005534516 nova_compute[253538]: 2025-11-25 08:57:58.843 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-deleted-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.058 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.058 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.082 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.167 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:57:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29797299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.265 253542 DEBUG oslo_concurrency.processutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.274 253542 DEBUG nova.compute.provider_tree [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.289 253542 DEBUG nova.scheduler.client.report [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.312 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.315 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.325 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.325 253542 INFO nova.compute.claims [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.354 253542 INFO nova.scheduler.client.report [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance ce8c3428-f7e4-49aa-9978-faaf5d514663#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.430 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.467 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:57:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 164 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 941 KiB/s wr, 142 op/s
Nov 25 03:57:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:57:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367924863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.972 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:57:59 np0005534516 nova_compute[253538]: 2025-11-25 08:57:59.978 253542 DEBUG nova.compute.provider_tree [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.003 253542 DEBUG nova.scheduler.client.report [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.026 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.027 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.082 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.083 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.103 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.122 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.208 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.210 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.211 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating image(s)#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.243 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.271 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.297 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.301 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.373 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.374 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.374 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.375 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.403 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.408 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7be1983b-1609-4155-b634-d14fc92539e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.603 253542 DEBUG nova.policy [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.708 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7be1983b-1609-4155-b634-d14fc92539e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.800 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.909 253542 DEBUG nova.objects.instance [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Ensure instance console log exists: /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.929 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.929 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:00 np0005534516 nova_compute[253538]: 2025-11-25 08:58:00.930 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:01 np0005534516 nova_compute[253538]: 2025-11-25 08:58:01.851 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Successfully created port: 91977aa8-6282-46cb-bc4f-42567be639f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:58:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 163 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 151 op/s
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.642 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Successfully updated port: 91977aa8-6282-46cb-bc4f-42567be639f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.659 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.660 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.660 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.722 253542 DEBUG nova.compute.manager [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.723 253542 DEBUG nova.compute.manager [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.723 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.825 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:58:02 np0005534516 nova_compute[253538]: 2025-11-25 08:58:02.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:03 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:03Z|01278|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 03:58:03 np0005534516 nova_compute[253538]: 2025-11-25 08:58:03.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:03 np0005534516 nova_compute[253538]: 2025-11-25 08:58:03.556 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061068.552513, 9b511004-21d7-4867-aa46-4e7219827b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:03 np0005534516 nova_compute[253538]: 2025-11-25 08:58:03.556 253542 INFO nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:58:03 np0005534516 nova_compute[253538]: 2025-11-25 08:58:03.579 253542 DEBUG nova.compute.manager [None req-014f0b8c-9555-4ecb-ba76-e0b55af00c4b - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 175 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 138 op/s
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007180662727095136 of space, bias 1.0, pg target 0.21541988181285407 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:58:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.114 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.135 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance network_info: |[{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.139 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start _get_guest_xml network_info=[{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.143 253542 WARNING nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.150 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.150 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.153 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.157 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.157 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.160 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:05Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 03:58:05 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:05Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 03:58:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:58:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680538638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.583 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.610 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:05 np0005534516 nova_compute[253538]: 2025-11-25 08:58:05.615 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 03:58:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 48K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1441 writes, 7014 keys, 1441 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s#012Interval WAL: 1441 writes, 1441 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.7      1.81              0.20        33    0.055       0      0       0.0       0.0#012  L6      1/0    7.43 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     62.1     51.6      4.79              0.82        32    0.150    185K    17K       0.0       0.0#012 Sum      1/0    7.43 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     45.0     46.2      6.60              1.03        65    0.101    185K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    107.9    103.0      0.62              0.22        14    0.044     50K   3600       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     62.1     51.6      4.79              0.82        32    0.150    185K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.7      1.81              0.20        32    0.056       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.056, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.30 GB write, 0.07 MB/s write, 0.29 GB read, 0.07 MB/s read, 6.6 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 33.79 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00027 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2226,32.44 MB,10.6714%) FilterBlock(66,524.55 KB,0.168504%) IndexBlock(66,852.62 KB,0.273895%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 03:58:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 190 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 03:58:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:58:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502249698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.065 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.069 253542 DEBUG nova.virt.libvirt.vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:00Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.069 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.072 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.074 253542 DEBUG nova.objects.instance [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.098 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <uuid>7be1983b-1609-4155-b634-d14fc92539e8</uuid>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <name>instance-0000007c</name>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-1380054515</nova:name>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:58:05</nova:creationTime>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <nova:port uuid="91977aa8-6282-46cb-bc4f-42567be639f9">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="serial">7be1983b-1609-4155-b634-d14fc92539e8</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="uuid">7be1983b-1609-4155-b634-d14fc92539e8</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7be1983b-1609-4155-b634-d14fc92539e8_disk">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7be1983b-1609-4155-b634-d14fc92539e8_disk.config">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:60:48:54"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <target dev="tap91977aa8-62"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/console.log" append="off"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:58:06 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:58:06 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:58:06 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:58:06 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.101 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Preparing to wait for external event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.102 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.102 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.103 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.104 253542 DEBUG nova.virt.libvirt.vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:00Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.105 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.106 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.106 253542 DEBUG os_vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.108 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.109 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.113 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91977aa8-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.114 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91977aa8-62, col_values=(('external_ids', {'iface-id': '91977aa8-6282-46cb-bc4f-42567be639f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:48:54', 'vm-uuid': '7be1983b-1609-4155-b634-d14fc92539e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:06 np0005534516 NetworkManager[48915]: <info>  [1764061086.1169] manager: (tap91977aa8-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.124 253542 INFO os_vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62')#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.189 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.190 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.190 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:60:48:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.191 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Using config drive#033[00m
Nov 25 03:58:06 np0005534516 nova_compute[253538]: 2025-11-25 08:58:06.218 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.305 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating config drive at /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.309 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvi1gferr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.449 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvi1gferr" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.487 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.491 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config 7be1983b-1609-4155-b634-d14fc92539e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.629 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.630 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.645 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.665 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config 7be1983b-1609-4155-b634-d14fc92539e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.665 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deleting local config drive /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config because it was imported into RBD.#033[00m
Nov 25 03:58:07 np0005534516 NetworkManager[48915]: <info>  [1764061087.7185] manager: (tap91977aa8-62): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Nov 25 03:58:07 np0005534516 kernel: tap91977aa8-62: entered promiscuous mode
Nov 25 03:58:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:07Z|01279|binding|INFO|Claiming lport 91977aa8-6282-46cb-bc4f-42567be639f9 for this chassis.
Nov 25 03:58:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:07Z|01280|binding|INFO|91977aa8-6282-46cb-bc4f-42567be639f9: Claiming fa:16:3e:60:48:54 10.100.0.10
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:07Z|01281|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 ovn-installed in OVS
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:07 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:07Z|01282|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 up in Southbound
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.744 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:48:54 10.100.0.10'], port_security=['fa:16:3e:60:48:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7be1983b-1609-4155-b634-d14fc92539e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d244d5b-1cd0-48b4-a9a9-c4313e58642b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=91977aa8-6282-46cb-bc4f-42567be639f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.746 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 91977aa8-6282-46cb-bc4f-42567be639f9 in datapath bf619d00-d285-4b9e-9996-77997075375e bound to our chassis#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.747 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e#033[00m
Nov 25 03:58:07 np0005534516 systemd-udevd[384392]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:58:07 np0005534516 systemd-machined[215790]: New machine qemu-154-instance-0000007c.
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.764 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[264b6d12-ffd6-4dc3-bb5b-345d4a40f055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 NetworkManager[48915]: <info>  [1764061087.7710] device (tap91977aa8-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:58:07 np0005534516 NetworkManager[48915]: <info>  [1764061087.7723] device (tap91977aa8-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:58:07 np0005534516 systemd[1]: Started Virtual Machine qemu-154-instance-0000007c.
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.794 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[48fbc33b-563b-44ca-a68a-62e1b9851b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.796 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[773609a7-c57c-4406-96ae-141c38bd577d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.822 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7715f221-f2c0-4473-b1a2-49f06a08228a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.838 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a30d5e5-8e06-4dad-88ce-dc4e50b7bdcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384404, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[688402a8-2830-4c2d-96ec-c6d5d59bc090]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642566, 'tstamp': 642566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384406, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642569, 'tstamp': 642569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384406, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.854 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.855 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:07 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.857 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 203 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG nova.compute.manager [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG nova.compute.manager [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Processing event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:58:07 np0005534516 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.557 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.5568595, 7be1983b-1609-4155-b634-d14fc92539e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.557 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Started (Lifecycle Event)#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.561 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.564 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.568 253542 INFO nova.virt.libvirt.driver [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance spawned successfully.#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.568 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.586 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.596 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.601 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.602 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.603 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.603 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.604 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.604 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.557015, 7be1983b-1609-4155-b634-d14fc92539e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.658 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.705 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.715 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.5633826, 7be1983b-1609-4155-b634-d14fc92539e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.715 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.736 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.739 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.746 253542 INFO nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.747 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.755 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.803 253542 INFO nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 9.66 seconds to build instance.#033[00m
Nov 25 03:58:08 np0005534516 nova_compute[253538]: 2025-11-25 08:58:08.818 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.112 253542 DEBUG nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.114 253542 DEBUG nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:10 np0005534516 nova_compute[253538]: 2025-11-25 08:58:10.114 253542 WARNING nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received unexpected event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:11 np0005534516 nova_compute[253538]: 2025-11-25 08:58:11.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.028 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061077.0272484, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.028 253542 INFO nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.052 253542 DEBUG nova.compute.manager [None req-7dbb174a-4358-4b50-b226-ee0d2f210010 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.208 253542 DEBUG nova.compute.manager [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.211 253542 DEBUG nova.compute.manager [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.211 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.212 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.212 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:12 np0005534516 nova_compute[253538]: 2025-11-25 08:58:12.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 124 op/s
Nov 25 03:58:14 np0005534516 nova_compute[253538]: 2025-11-25 08:58:14.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:14 np0005534516 podman[384452]: 2025-11-25 08:58:14.806986821 +0000 UTC m=+0.052270044 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 03:58:14 np0005534516 podman[384471]: 2025-11-25 08:58:14.889276322 +0000 UTC m=+0.047228277 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 03:58:15 np0005534516 nova_compute[253538]: 2025-11-25 08:58:15.121 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:15 np0005534516 nova_compute[253538]: 2025-11-25 08:58:15.122 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:15 np0005534516 nova_compute[253538]: 2025-11-25 08:58:15.143 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Nov 25 03:58:16 np0005534516 nova_compute[253538]: 2025-11-25 08:58:16.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Nov 25 03:58:17 np0005534516 nova_compute[253538]: 2025-11-25 08:58:17.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.387 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.387 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.406 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.483 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.484 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.497 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.498 253542 INFO nova.compute.claims [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:58:19 np0005534516 nova_compute[253538]: 2025-11-25 08:58:19.731 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 105 KiB/s wr, 97 op/s
Nov 25 03:58:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179490891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.209 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.220 253542 DEBUG nova.compute.provider_tree [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.237 253542 DEBUG nova.scheduler.client.report [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.275 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.276 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.363 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.363 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.384 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.407 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.501 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.502 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.502 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating image(s)#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.522 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.544 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.566 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.570 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.616 253542 DEBUG nova.policy [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.661 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.662 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.662 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.663 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.686 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:20 np0005534516 nova_compute[253538]: 2025-11-25 08:58:20.690 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9447890d-1fff-4536-a0cd-b889c23f7479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:20 np0005534516 podman[384589]: 2025-11-25 08:58:20.85827723 +0000 UTC m=+0.109490181 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:58:21 np0005534516 nova_compute[253538]: 2025-11-25 08:58:21.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:21Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:48:54 10.100.0.10
Nov 25 03:58:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:21Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:48:54 10.100.0.10
Nov 25 03:58:21 np0005534516 nova_compute[253538]: 2025-11-25 08:58:21.923 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Successfully created port: 6e0acf79-7148-4555-9265-b449f234806e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:58:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 225 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.052 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9447890d-1fff-4536-a0cd-b889c23f7479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.135 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.241 253542 DEBUG nova.objects.instance [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.257 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.257 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Ensure instance console log exists: /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:22 np0005534516 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.386 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Successfully updated port: 6e0acf79-7148-4555-9265-b449f234806e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.399 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.399 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.400 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.474 253542 DEBUG nova.compute.manager [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.475 253542 DEBUG nova.compute.manager [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.475 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:23 np0005534516 nova_compute[253538]: 2025-11-25 08:58:23.920 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:58:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 248 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.230 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance network_info: |[{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.341 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.344 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start _get_guest_xml network_info=[{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.349 253542 WARNING nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.358 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.360 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.364 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.365 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.366 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.367 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.368 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.368 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.370 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.370 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.371 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.371 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.372 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.377 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:58:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732512523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.870 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.891 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:25 np0005534516 nova_compute[253538]: 2025-11-25 08:58:25.895 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 271 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 762 KiB/s rd, 3.4 MiB/s wr, 78 op/s
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.168 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:58:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299988978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.326 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.328 253542 DEBUG nova.virt.libvirt.vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:20Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.328 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.329 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.330 253542 DEBUG nova.objects.instance [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.360 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <uuid>9447890d-1fff-4536-a0cd-b889c23f7479</uuid>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <name>instance-0000007d</name>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251</nova:name>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:58:25</nova:creationTime>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <nova:port uuid="6e0acf79-7148-4555-9265-b449f234806e">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="serial">9447890d-1fff-4536-a0cd-b889c23f7479</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="uuid">9447890d-1fff-4536-a0cd-b889c23f7479</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9447890d-1fff-4536-a0cd-b889c23f7479_disk">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/9447890d-1fff-4536-a0cd-b889c23f7479_disk.config">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f7:96:34"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <target dev="tap6e0acf79-71"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/console.log" append="off"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:58:26 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:58:26 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:58:26 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:58:26 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.362 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Preparing to wait for external event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.362 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.363 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.363 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.365 253542 DEBUG nova.virt.libvirt.vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:20Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.365 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.366 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.367 253542 DEBUG os_vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.369 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.370 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.374 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e0acf79-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.376 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e0acf79-71, col_values=(('external_ids', {'iface-id': '6e0acf79-7148-4555-9265-b449f234806e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:96:34', 'vm-uuid': '9447890d-1fff-4536-a0cd-b889c23f7479'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:26 np0005534516 NetworkManager[48915]: <info>  [1764061106.3802] manager: (tap6e0acf79-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.387 253542 INFO os_vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71')#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.449 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.450 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.450 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:f7:96:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.451 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Using config drive#033[00m
Nov 25 03:58:26 np0005534516 nova_compute[253538]: 2025-11-25 08:58:26.476 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.359 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating config drive at /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.370 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuzaq3xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.440 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.442 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.461 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.560 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuzaq3xh" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.595 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.599 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.773 253542 INFO nova.compute.manager [None req-2f61dd6f-f29c-452d-8b89-91800300b0ff 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output#033[00m
Nov 25 03:58:27 np0005534516 nova_compute[253538]: 2025-11-25 08:58:27.780 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:58:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 292 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.222 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.223 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deleting local config drive /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config because it was imported into RBD.#033[00m
Nov 25 03:58:28 np0005534516 kernel: tap6e0acf79-71: entered promiscuous mode
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.2997] manager: (tap6e0acf79-71): new Tun device (/org/freedesktop/NetworkManager/Devices/527)
Nov 25 03:58:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:28Z|01283|binding|INFO|Claiming lport 6e0acf79-7148-4555-9265-b449f234806e for this chassis.
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:28Z|01284|binding|INFO|6e0acf79-7148-4555-9265-b449f234806e: Claiming fa:16:3e:f7:96:34 10.100.0.5
Nov 25 03:58:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:28Z|01285|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e ovn-installed in OVS
Nov 25 03:58:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:28Z|01286|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e up in Southbound
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.325 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:34 10.100.0.5'], port_security=['fa:16:3e:f7:96:34 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9447890d-1fff-4536-a0cd-b889c23f7479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14 31fd3dba-a142-469b-a6ad-eb14c55eb5d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6e0acf79-7148-4555-9265-b449f234806e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.328 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0acf79-7148-4555-9265-b449f234806e in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 bound to our chassis#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.331 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0#033[00m
Nov 25 03:58:28 np0005534516 systemd-machined[215790]: New machine qemu-155-instance-0000007d.
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d64e0e64-23cf-4228-8d0d-ff4dc65c99f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.352 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfb2a9da-11 in ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:58:28 np0005534516 systemd-udevd[384844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.355 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfb2a9da-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.355 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1ce302-d60b-4249-9c40-28a8de296e66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.357 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[441d8f61-095d-4514-9d92-04ccf1dab754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 systemd[1]: Started Virtual Machine qemu-155-instance-0000007d.
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.373 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[63027f1e-0516-4057-af90-ceff8cdf3afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.3765] device (tap6e0acf79-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.3778] device (tap6e0acf79-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c97f1-b362-46b6-8715-27428448e13d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.425 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf25dd-76a6-45d1-9b96-674a7b40d57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 systemd-udevd[384847]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.430 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35ab4140-be03-488d-abd7-b92e7c1a0dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.4326] manager: (tapbfb2a9da-10): new Veth device (/org/freedesktop/NetworkManager/Devices/528)
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.466 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d03df461-c461-46f1-8e43-9ad427dd0973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.469 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[35e2fe99-b596-4171-a966-1dacba4f5b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.4937] device (tapbfb2a9da-10): carrier: link connected
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.498 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3875290f-5695-4b5e-b77c-fa92551b1555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.522 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc61ba6-f079-40b8-993b-f4739d07f6f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384876, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd00221-fed1-4d92-bc96-df06f99bfe10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9582'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646309, 'tstamp': 646309}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384877, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08ffdb82-3713-40b8-92f7-af828d1dbba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384878, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c999816-2e14-4b02-aa2a-b68f0c8444f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24050fd2-8a35-4e32-816f-2fd7c47c1ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 NetworkManager[48915]: <info>  [1764061108.7057] manager: (tapbfb2a9da-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Nov 25 03:58:28 np0005534516 kernel: tapbfb2a9da-10: entered promiscuous mode
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.710 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:28Z|01287|binding|INFO|Releasing lport 0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f from this chassis (sb_readonly=0)
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.716 253542 DEBUG nova.compute.manager [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.717 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.717 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.718 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.718 253542 DEBUG nova.compute.manager [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Processing event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.727 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.728 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8bb9ae-5432-4629-aedc-1d837741856c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.729 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:58:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.730 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'env', 'PROCESS_TAG=haproxy-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.894 253542 DEBUG nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.895 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.895 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.896 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.896 253542 DEBUG nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:28 np0005534516 nova_compute[253538]: 2025-11-25 08:58:28.897 253542 WARNING nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:58:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:58:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:58:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.142 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.144 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1413898, 9447890d-1fff-4536-a0cd-b889c23f7479 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.145 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Started (Lifecycle Event)#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.155 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:58:29 np0005534516 podman[384951]: 2025-11-25 08:58:29.156839348 +0000 UTC m=+0.071662272 container create 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.166 253542 INFO nova.virt.libvirt.driver [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance spawned successfully.#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.167 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.191 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.201 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:58:29 np0005534516 systemd[1]: Started libpod-conmon-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope.
Nov 25 03:58:29 np0005534516 podman[384951]: 2025-11-25 08:58:29.112111801 +0000 UTC m=+0.026934815 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.205 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.208 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:58:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:58:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e04d9311f2566d0bdda347b5c2a05beefee95a9cbfe91f3a4e7e52afc379d7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.247 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.248 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1418667, 9447890d-1fff-4536-a0cd-b889c23f7479 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.248 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:58:29 np0005534516 podman[384951]: 2025-11-25 08:58:29.256049158 +0000 UTC m=+0.170872102 container init 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 03:58:29 np0005534516 podman[384951]: 2025-11-25 08:58:29.261578089 +0000 UTC m=+0.176401013 container start 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.278 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.282 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1538963, 9447890d-1fff-4536-a0cd-b889c23f7479 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.282 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:58:29 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : New worker (384973) forked
Nov 25 03:58:29 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : Loading success.
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.293 253542 INFO nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 8.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.293 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.320 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.349 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.365 253542 INFO nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 9.91 seconds to build instance.#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.402 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.935 253542 INFO nova.compute.manager [None req-ea625294-b460-4f30-b6b1-2374ee7e0de2 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output#033[00m
Nov 25 03:58:29 np0005534516 nova_compute[253538]: 2025-11-25 08:58:29.942 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:58:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.814 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.815 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.815 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.816 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.816 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.997 253542 DEBUG nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.998 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:30 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.998 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:31 np0005534516 nova_compute[253538]: 2025-11-25 08:58:30.999 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:31 np0005534516 nova_compute[253538]: 2025-11-25 08:58:31.000 253542 DEBUG nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:31 np0005534516 nova_compute[253538]: 2025-11-25 08:58:31.000 253542 WARNING nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:31 np0005534516 nova_compute[253538]: 2025-11-25 08:58:31.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.340 253542 INFO nova.compute.manager [None req-f8e5d89b-e181-41ce-944e-e89a3ee1341a 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.347 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.533 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.534 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.549 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.551 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:32 np0005534516 nova_compute[253538]: 2025-11-25 08:58:32.551 253542 WARNING nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received unexpected event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.125 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.126 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.127 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.128 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:33 np0005534516 nova_compute[253538]: 2025-11-25 08:58:33.129 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 104 op/s
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.373 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.375 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.376 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.376 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.377 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.379 253542 INFO nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Terminating instance#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.381 253542 DEBUG nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:58:34 np0005534516 kernel: tap91977aa8-62 (unregistering): left promiscuous mode
Nov 25 03:58:34 np0005534516 NetworkManager[48915]: <info>  [1764061114.4463] device (tap91977aa8-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:58:34 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:34Z|01288|binding|INFO|Releasing lport 91977aa8-6282-46cb-bc4f-42567be639f9 from this chassis (sb_readonly=0)
Nov 25 03:58:34 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:34Z|01289|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 down in Southbound
Nov 25 03:58:34 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:34Z|01290|binding|INFO|Removing iface tap91977aa8-62 ovn-installed in OVS
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.457 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.484 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:48:54 10.100.0.10'], port_security=['fa:16:3e:60:48:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7be1983b-1609-4155-b634-d14fc92539e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d244d5b-1cd0-48b4-a9a9-c4313e58642b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=91977aa8-6282-46cb-bc4f-42567be639f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.489 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 91977aa8-6282-46cb-bc4f-42567be639f9 in datapath bf619d00-d285-4b9e-9996-77997075375e unbound from our chassis#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.491 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e#033[00m
Nov 25 03:58:34 np0005534516 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 25 03:58:34 np0005534516 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007c.scope: Consumed 14.350s CPU time.
Nov 25 03:58:34 np0005534516 systemd-machined[215790]: Machine qemu-154-instance-0000007c terminated.
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.504 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f49c466d-43ea-4b22-a079-d271b07d1c09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.538 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8658ac62-1bcf-476b-ac47-c8a088576dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfed9c9-61d7-4918-b7ac-5aa0e69dfa56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.575 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[486c0961-160f-4487-bee4-0f35f38b5475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3beb41-4329-43b1-9f81-7b2ad093d759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384994, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4028733-9dd8-475e-85ca-9ec7923d8daf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642566, 'tstamp': 642566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384997, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642569, 'tstamp': 642569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384997, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.669 253542 INFO nova.virt.libvirt.driver [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance destroyed successfully.#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.669 253542 DEBUG nova.objects.instance [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.674 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.684 253542 DEBUG nova.virt.libvirt.vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:58:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:58:08Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.685 253542 DEBUG nova.network.os_vif_util [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.686 253542 DEBUG nova.network.os_vif_util [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.687 253542 DEBUG os_vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.689 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91977aa8-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:58:34 np0005534516 nova_compute[253538]: 2025-11-25 08:58:34.696 253542 INFO os_vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62')#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.200 253542 INFO nova.virt.libvirt.driver [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deleting instance files /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8_del#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.202 253542 INFO nova.virt.libvirt.driver [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deletion of /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8_del complete#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.335 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.336 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.336 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.337 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.338 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.338 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.343 253542 INFO nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.344 253542 DEBUG oslo.service.loopingcall [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.345 253542 DEBUG nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.345 253542 DEBUG nova.network.neutron [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.422 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.423 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.424 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.424 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:35 np0005534516 nova_compute[253538]: 2025-11-25 08:58:35.425 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.441 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.443 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.463 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.467 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.468 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.469 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.469 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.470 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.470 253542 WARNING nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.471 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.472 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.472 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.473 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.473 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.474 253542 WARNING nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.835 253542 DEBUG nova.network.neutron [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:36 np0005534516 nova_compute[253538]: 2025-11-25 08:58:36.869 253542 INFO nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 1.52 seconds to deallocate network for instance.#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.049 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.050 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.187 253542 DEBUG oslo_concurrency.processutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.441 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.443 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.556 253542 DEBUG nova.compute.manager [req-29ece294-e5a2-40cb-bb57-d11b517832ab req-44ad8be3-f8bc-4f7f-88bf-f6d3fa3b08cf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-deleted-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.592 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.594 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.594 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.637 253542 DEBUG nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.638 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.638 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.639 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.639 253542 DEBUG nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.640 253542 WARNING nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received unexpected event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:58:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276302590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.677 253542 DEBUG oslo_concurrency.processutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.685 253542 DEBUG nova.compute.provider_tree [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.709 253542 DEBUG nova.scheduler.client.report [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.778 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:37 np0005534516 nova_compute[253538]: 2025-11-25 08:58:37.908 253542 INFO nova.scheduler.client.report [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 7be1983b-1609-4155-b634-d14fc92539e8#033[00m
Nov 25 03:58:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 245 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 125 op/s
Nov 25 03:58:38 np0005534516 nova_compute[253538]: 2025-11-25 08:58:38.015 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:38 np0005534516 nova_compute[253538]: 2025-11-25 08:58:38.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:39 np0005534516 nova_compute[253538]: 2025-11-25 08:58:39.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 213 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.145 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.146 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.168 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.607 253542 DEBUG nova.compute.manager [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.607 253542 DEBUG nova.compute.manager [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.608 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.608 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.609 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.871 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.873 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.874 253542 INFO nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Terminating instance#033[00m
Nov 25 03:58:40 np0005534516 nova_compute[253538]: 2025-11-25 08:58:40.875 253542 DEBUG nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.082 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:41 np0005534516 kernel: tap027edfd6-09 (unregistering): left promiscuous mode
Nov 25 03:58:41 np0005534516 NetworkManager[48915]: <info>  [1764061121.4334] device (tap027edfd6-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:58:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:41Z|01291|binding|INFO|Releasing lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 from this chassis (sb_readonly=0)
Nov 25 03:58:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:41Z|01292|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 down in Southbound
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:41Z|01293|binding|INFO|Removing iface tap027edfd6-09 ovn-installed in OVS
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.487 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e8:8d 10.100.0.9'], port_security=['fa:16:3e:d9:e8:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9d7a9182-641d-4ca5-a3ab-361222a77391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=027edfd6-09a6-4bf4-88df-8a19e59d1f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.489 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 in datapath bf619d00-d285-4b9e-9996-77997075375e unbound from our chassis#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.490 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf619d00-d285-4b9e-9996-77997075375e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b8a73b-861d-49dc-9ea5-da044b4ede54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.492 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf619d00-d285-4b9e-9996-77997075375e namespace which is not needed anymore#033[00m
Nov 25 03:58:41 np0005534516 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 25 03:58:41 np0005534516 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007b.scope: Consumed 15.511s CPU time.
Nov 25 03:58:41 np0005534516 systemd-machined[215790]: Machine qemu-153-instance-0000007b terminated.
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:58:41 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : haproxy version is 2.8.14-c23fe91
Nov 25 03:58:41 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : path to executable is /usr/sbin/haproxy
Nov 25 03:58:41 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [WARNING]  (383647) : Exiting Master process...
Nov 25 03:58:41 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [ALERT]    (383647) : Current worker (383649) exited with code 143 (Terminated)
Nov 25 03:58:41 np0005534516 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [WARNING]  (383647) : All workers exited. Exiting... (0)
Nov 25 03:58:41 np0005534516 systemd[1]: libpod-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope: Deactivated successfully.
Nov 25 03:58:41 np0005534516 podman[385076]: 2025-11-25 08:58:41.688653256 +0000 UTC m=+0.083077472 container died 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.727 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance destroyed successfully.#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.730 253542 DEBUG nova.objects.instance [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.742 253542 DEBUG nova.virt.libvirt.vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:57:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:57:51Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.745 253542 DEBUG nova.network.os_vif_util [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.747 253542 DEBUG nova.network.os_vif_util [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.748 253542 DEBUG os_vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap027edfd6-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:58:41 np0005534516 nova_compute[253538]: 2025-11-25 08:58:41.758 253542 INFO os_vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09')#033[00m
Nov 25 03:58:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c4540958df62a9ce7e0b75ff06f1bfd795b20e2ca8437d0035d39c10d955612c-merged.mount: Deactivated successfully.
Nov 25 03:58:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94-userdata-shm.mount: Deactivated successfully.
Nov 25 03:58:41 np0005534516 podman[385076]: 2025-11-25 08:58:41.933610303 +0000 UTC m=+0.328034479 container cleanup 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 03:58:41 np0005534516 systemd[1]: libpod-conmon-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope: Deactivated successfully.
Nov 25 03:58:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 224 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 884 KiB/s wr, 106 op/s
Nov 25 03:58:42 np0005534516 podman[385136]: 2025-11-25 08:58:42.244579538 +0000 UTC m=+0.280906987 container remove 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[527e4013-ea1c-4db4-93a5-131b40a1173f]: (4, ('Tue Nov 25 08:58:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e (2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94)\n2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94\nTue Nov 25 08:58:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e (2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94)\n2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.256 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c1953a-60d5-475d-98f7-8bbf22ecc2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:42 np0005534516 kernel: tapbf619d00-d0: left promiscuous mode
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81324436-b5de-4d43-b231-05d1d4d3c12f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2d5d28-abe6-4bd0-a724-d3391bac558b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[854e1cf2-9ad7-4a29-b422-86a41d5da7d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a0b870-d8e4-401b-b7a1-99d357fa5eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642545, 'reachable_time': 19642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385152, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 systemd[1]: run-netns-ovnmeta\x2dbf619d00\x2dd285\x2d4b9e\x2d9996\x2d77997075375e.mount: Deactivated successfully.
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.311 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf619d00-d285-4b9e-9996-77997075375e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:58:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.311 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[759ea5ec-818b-4b5a-b531-8d692e9fb199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.411 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.411 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.727 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.727 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.730 253542 WARNING nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 03:58:42 np0005534516 nova_compute[253538]: 2025-11-25 08:58:42.826 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.065 253542 INFO nova.virt.libvirt.driver [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deleting instance files /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_del#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.066 253542 INFO nova.virt.libvirt.driver [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deletion of /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_del complete#033[00m
Nov 25 03:58:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:43Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:96:34 10.100.0.5
Nov 25 03:58:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:43Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:96:34 10.100.0.5
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.278 253542 INFO nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 2.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.278 253542 DEBUG oslo.service.loopingcall [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.279 253542 DEBUG nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:58:43 np0005534516 nova_compute[253538]: 2025-11-25 08:58:43.279 253542 DEBUG nova.network.neutron [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:58:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 197 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 113 op/s
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.304 253542 DEBUG nova.network.neutron [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.344 253542 INFO nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 1.07 seconds to deallocate network for instance.#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.446 253542 DEBUG nova.compute.manager [req-18605d96-9067-4a5a-b84e-2d4a5c3a519c req-b815ea7f-66ab-44d1-aac9-ac8ed8d491cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-deleted-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.449 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.449 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.536 253542 DEBUG oslo_concurrency.processutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.581 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 03:58:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767464915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:44 np0005534516 nova_compute[253538]: 2025-11-25 08:58:44.994 253542 DEBUG oslo_concurrency.processutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.001 253542 DEBUG nova.compute.provider_tree [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.017 253542 DEBUG nova.scheduler.client.report [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.062 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.146 253542 INFO nova.scheduler.client.report [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 5e55aa91-2aa5-4443-b976-0f3e4409e8ec#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.232 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:45 np0005534516 nova_compute[253538]: 2025-11-25 08:58:45.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:45 np0005534516 podman[385178]: 2025-11-25 08:58:45.849141039 +0000 UTC m=+0.091391778 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 03:58:45 np0005534516 podman[385179]: 2025-11-25 08:58:45.86278834 +0000 UTC m=+0.105786040 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 03:58:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 192 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 03:58:46 np0005534516 nova_compute[253538]: 2025-11-25 08:58:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:46 np0005534516 nova_compute[253538]: 2025-11-25 08:58:46.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 164 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 03:58:48 np0005534516 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798145365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.015 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.088 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.088 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.283 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.284 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.94315719604492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.284 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.285 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.492 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9447890d-1fff-4536-a0cd-b889c23f7479 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.493 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.494 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.539 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.668 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061114.6664689, 7be1983b-1609-4155-b634-d14fc92539e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.669 253542 INFO nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.694 253542 DEBUG nova.compute.manager [None req-76e8fb6b-ed11-47b4-8c18-f26d967503ad - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 03:58:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299233274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.977 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.984 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:49 np0005534516 nova_compute[253538]: 2025-11-25 08:58:49.998 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:50 np0005534516 nova_compute[253538]: 2025-11-25 08:58:50.037 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 03:58:50 np0005534516 nova_compute[253538]: 2025-11-25 08:58:50.038 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:51 np0005534516 nova_compute[253538]: 2025-11-25 08:58:51.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 03:58:52 np0005534516 ovn_controller[152859]: 2025-11-25T08:58:52Z|01294|binding|INFO|Releasing lport 0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f from this chassis (sb_readonly=0)
Nov 25 03:58:52 np0005534516 nova_compute[253538]: 2025-11-25 08:58:52.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:52 np0005534516 podman[385262]: 2025-11-25 08:58:52.850216187 +0000 UTC m=+0.100852006 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 03:58:53 np0005534516 nova_compute[253538]: 2025-11-25 08:58:53.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:58:53
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'backups', 'cephfs.cephfs.data', 'images']
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 1.3 MiB/s wr, 82 op/s
Nov 25 03:58:53 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:58:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:58:55 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev da4b1229-30f9-4445-9f66-2eb4d44743e8 does not exist
Nov 25 03:58:55 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5af46c16-af4c-429d-85e8-b73b94d2dfa1 does not exist
Nov 25 03:58:55 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8d6a4438-54d8-4f10-939f-5ea9632673e6 does not exist
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 03:58:55 np0005534516 nova_compute[253538]: 2025-11-25 08:58:55.031 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 03:58:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 03:58:55 np0005534516 podman[385558]: 2025-11-25 08:58:55.647241399 +0000 UTC m=+0.041500730 container create da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:58:55 np0005534516 podman[385558]: 2025-11-25 08:58:55.627135162 +0000 UTC m=+0.021394523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:58:55 np0005534516 systemd[1]: Started libpod-conmon-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope.
Nov 25 03:58:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 641 KiB/s wr, 55 op/s
Nov 25 03:58:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:58:56 np0005534516 podman[385558]: 2025-11-25 08:58:56.049100708 +0000 UTC m=+0.443360069 container init da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:58:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:58:56 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 03:58:56 np0005534516 podman[385558]: 2025-11-25 08:58:56.058356119 +0000 UTC m=+0.452615460 container start da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 03:58:56 np0005534516 podman[385558]: 2025-11-25 08:58:56.062114082 +0000 UTC m=+0.456373423 container attach da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:58:56 np0005534516 sad_morse[385574]: 167 167
Nov 25 03:58:56 np0005534516 systemd[1]: libpod-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope: Deactivated successfully.
Nov 25 03:58:56 np0005534516 podman[385558]: 2025-11-25 08:58:56.066229234 +0000 UTC m=+0.460488615 container died da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 03:58:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-62db4a025b71b211f6fd7d76d7e892faaad23e1fcfe2f324738f59dbd14d6697-merged.mount: Deactivated successfully.
Nov 25 03:58:56 np0005534516 podman[385558]: 2025-11-25 08:58:56.117946082 +0000 UTC m=+0.512205423 container remove da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 03:58:56 np0005534516 systemd[1]: libpod-conmon-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope: Deactivated successfully.
Nov 25 03:58:56 np0005534516 podman[385599]: 2025-11-25 08:58:56.318011587 +0000 UTC m=+0.046855347 container create c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 03:58:56 np0005534516 systemd[1]: Started libpod-conmon-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope.
Nov 25 03:58:56 np0005534516 podman[385599]: 2025-11-25 08:58:56.302157096 +0000 UTC m=+0.031000876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:58:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:58:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.433 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.435 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:56 np0005534516 podman[385599]: 2025-11-25 08:58:56.441454067 +0000 UTC m=+0.170297847 container init c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:58:56 np0005534516 podman[385599]: 2025-11-25 08:58:56.453765023 +0000 UTC m=+0.182608813 container start c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 03:58:56 np0005534516 podman[385599]: 2025-11-25 08:58:56.457098364 +0000 UTC m=+0.185942134 container attach c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.456 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.581 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.582 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.593 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.593 253542 INFO nova.compute.claims [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.724 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061121.7238796, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.725 253542 INFO nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.740 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:56 np0005534516 nova_compute[253538]: 2025-11-25 08:58:56.794 253542 DEBUG nova.compute.manager [None req-d616372a-d88a-4b1b-972b-b00b38dec989 - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:58:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521499799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.292 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.299 253542 DEBUG nova.compute.provider_tree [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.317 253542 DEBUG nova.scheduler.client.report [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.337 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.338 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.387 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.388 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.409 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.435 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:58:57 np0005534516 crazy_mahavira[385616]: --> passed data devices: 0 physical, 3 LVM
Nov 25 03:58:57 np0005534516 crazy_mahavira[385616]: --> relative data size: 1.0
Nov 25 03:58:57 np0005534516 crazy_mahavira[385616]: --> All data devices are unavailable
Nov 25 03:58:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:57.520 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:58:57.522 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 03:58:57 np0005534516 systemd[1]: libpod-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Deactivated successfully.
Nov 25 03:58:57 np0005534516 systemd[1]: libpod-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Consumed 1.023s CPU time.
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.550 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.552 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.553 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating image(s)#033[00m
Nov 25 03:58:57 np0005534516 podman[385667]: 2025-11-25 08:58:57.569024209 +0000 UTC m=+0.025825924 container died c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.582 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a-merged.mount: Deactivated successfully.
Nov 25 03:58:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.617 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:57 np0005534516 podman[385667]: 2025-11-25 08:58:57.637855282 +0000 UTC m=+0.094656977 container remove c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.643 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:57 np0005534516 systemd[1]: libpod-conmon-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Deactivated successfully.
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.649 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.698 253542 DEBUG nova.policy [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.739 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.763 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:58:57 np0005534516 nova_compute[253538]: 2025-11-25 08:58:57.770 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:58:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 112 KiB/s wr, 38 op/s
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.119 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.188 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.330 253542 DEBUG nova.objects.instance [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.341 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.342 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Ensure instance console log exists: /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.342 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.343 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.343 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.384881605 +0000 UTC m=+0.064899857 container create 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:58:58 np0005534516 systemd[1]: Started libpod-conmon-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope.
Nov 25 03:58:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.35640008 +0000 UTC m=+0.036418382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.460996717 +0000 UTC m=+0.141014999 container init 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.468642335 +0000 UTC m=+0.148660557 container start 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.471576785 +0000 UTC m=+0.151595047 container attach 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 03:58:58 np0005534516 elegant_gauss[386004]: 167 167
Nov 25 03:58:58 np0005534516 systemd[1]: libpod-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope: Deactivated successfully.
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.473875607 +0000 UTC m=+0.153893839 container died 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 03:58:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a052771c98df8563db783cd7b54dd061ceadf1d71206a81c1033bd162ae34e20-merged.mount: Deactivated successfully.
Nov 25 03:58:58 np0005534516 podman[385971]: 2025-11-25 08:58:58.508222312 +0000 UTC m=+0.188240534 container remove 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 03:58:58 np0005534516 systemd[1]: libpod-conmon-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope: Deactivated successfully.
Nov 25 03:58:58 np0005534516 nova_compute[253538]: 2025-11-25 08:58:58.605 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Successfully created port: eba714df-d5db-464e-b5b6-6d56c52d33fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:58:58 np0005534516 podman[386027]: 2025-11-25 08:58:58.772263539 +0000 UTC m=+0.079437823 container create acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:58:58 np0005534516 systemd[1]: Started libpod-conmon-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope.
Nov 25 03:58:58 np0005534516 podman[386027]: 2025-11-25 08:58:58.732912638 +0000 UTC m=+0.040087002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:58:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:58:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:58:58 np0005534516 podman[386027]: 2025-11-25 08:58:58.885779579 +0000 UTC m=+0.192953903 container init acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:58:58 np0005534516 podman[386027]: 2025-11-25 08:58:58.893168499 +0000 UTC m=+0.200342793 container start acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:58:58 np0005534516 podman[386027]: 2025-11-25 08:58:58.897470847 +0000 UTC m=+0.204645211 container attach acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.273 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Successfully updated port: eba714df-d5db-464e-b5b6-6d56c52d33fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.310 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.311 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.312 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG nova.compute.manager [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-changed-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG nova.compute.manager [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Refreshing instance network info cache due to event network-changed-eba714df-d5db-464e-b5b6-6d56c52d33fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:58:59 np0005534516 nova_compute[253538]: 2025-11-25 08:58:59.590 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]: {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    "0": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "devices": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "/dev/loop3"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            ],
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_name": "ceph_lv0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_size": "21470642176",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "name": "ceph_lv0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "tags": {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_name": "ceph",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.crush_device_class": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.encrypted": "0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_id": "0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.vdo": "0"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            },
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "vg_name": "ceph_vg0"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        }
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    ],
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    "1": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "devices": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "/dev/loop4"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            ],
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_name": "ceph_lv1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_size": "21470642176",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "name": "ceph_lv1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "tags": {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_name": "ceph",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.crush_device_class": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.encrypted": "0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_id": "1",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.vdo": "0"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            },
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "vg_name": "ceph_vg1"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        }
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    ],
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    "2": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "devices": [
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "/dev/loop5"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            ],
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_name": "ceph_lv2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_size": "21470642176",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "name": "ceph_lv2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "tags": {
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cephx_lockbox_secret": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.cluster_name": "ceph",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.crush_device_class": "",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.encrypted": "0",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osd_id": "2",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:                "ceph.vdo": "0"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            },
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "type": "block",
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:            "vg_name": "ceph_vg2"
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:        }
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]:    ]
Nov 25 03:58:59 np0005534516 vibrant_ride[386044]: }
Nov 25 03:58:59 np0005534516 systemd[1]: libpod-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope: Deactivated successfully.
Nov 25 03:58:59 np0005534516 podman[386027]: 2025-11-25 08:58:59.768840934 +0000 UTC m=+1.076015238 container died acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 03:58:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465-merged.mount: Deactivated successfully.
Nov 25 03:58:59 np0005534516 podman[386027]: 2025-11-25 08:58:59.846211191 +0000 UTC m=+1.153385465 container remove acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 03:58:59 np0005534516 systemd[1]: libpod-conmon-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope: Deactivated successfully.
Nov 25 03:58:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 209 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.586564192 +0000 UTC m=+0.065669559 container create 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:59:00 np0005534516 systemd[1]: Started libpod-conmon-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope.
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.558538889 +0000 UTC m=+0.037644306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:59:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.668846191 +0000 UTC m=+0.147951638 container init 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.67688342 +0000 UTC m=+0.155988827 container start 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.67982046 +0000 UTC m=+0.158925867 container attach 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 03:59:00 np0005534516 sleepy_haslett[386222]: 167 167
Nov 25 03:59:00 np0005534516 systemd[1]: libpod-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope: Deactivated successfully.
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.685680909 +0000 UTC m=+0.164786306 container died 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.694 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.709 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:59:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9dafd25d5898352859509f39a402ce96d59cf4468d9cfe317266c8d25b033853-merged.mount: Deactivated successfully.
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.710 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance network_info: |[{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.710 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.711 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Refreshing network info cache for port eba714df-d5db-464e-b5b6-6d56c52d33fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.714 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start _get_guest_xml network_info=[{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.721 253542 WARNING nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:59:00 np0005534516 podman[386206]: 2025-11-25 08:59:00.725145033 +0000 UTC m=+0.204250410 container remove 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.728 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.729 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.733 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.733 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.734 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.734 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.737 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.737 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:59:00 np0005534516 nova_compute[253538]: 2025-11-25 08:59:00.740 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:00 np0005534516 systemd[1]: libpod-conmon-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope: Deactivated successfully.
Nov 25 03:59:00 np0005534516 podman[386257]: 2025-11-25 08:59:00.985424998 +0000 UTC m=+0.074395215 container create eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 03:59:01 np0005534516 systemd[1]: Started libpod-conmon-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope.
Nov 25 03:59:01 np0005534516 podman[386257]: 2025-11-25 08:59:00.95278418 +0000 UTC m=+0.041754477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 03:59:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:59:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 03:59:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 03:59:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 03:59:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 03:59:01 np0005534516 podman[386257]: 2025-11-25 08:59:01.076113437 +0000 UTC m=+0.165083684 container init eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 03:59:01 np0005534516 podman[386257]: 2025-11-25 08:59:01.088293788 +0000 UTC m=+0.177264035 container start eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 03:59:01 np0005534516 podman[386257]: 2025-11-25 08:59:01.093619563 +0000 UTC m=+0.182589830 container attach eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 03:59:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:59:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16663584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.238 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.260 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.263 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:59:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068820527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.673 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.676 253542 DEBUG nova.virt.libvirt.vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.677 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.679 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.681 253542 DEBUG nova.objects.instance [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.697 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <uuid>68c6ff41-ad19-4b3d-947d-0a5d72e4042c</uuid>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <name>instance-0000007e</name>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683</nova:name>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:59:00</nova:creationTime>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <nova:port uuid="eba714df-d5db-464e-b5b6-6d56c52d33fd">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="serial">68c6ff41-ad19-4b3d-947d-0a5d72e4042c</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="uuid">68c6ff41-ad19-4b3d-947d-0a5d72e4042c</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:55:3b:fd"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <target dev="tapeba714df-d5"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/console.log" append="off"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:59:01 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:59:01 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:59:01 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:59:01 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
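The guest definition dumped above by `_get_guest_xml` is plain libvirt domain XML. When debugging storage issues it is often handy to pull the Ceph-backed disk sources straight out of such a dump; a minimal sketch with the standard library, using an abridged copy of the XML logged above (the `rbd_sources` helper is illustrative, not part of nova):

```python
# Extract RBD disk sources from a libvirt domain XML dump like the one
# nova's _get_guest_xml logs above. DOMAIN_XML is abridged from the log.
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def rbd_sources(domain_xml: str):
    """Return (target_dev, rbd_image, monitor) for each RBD-backed disk."""
    root = ET.fromstring(domain_xml)
    out = []
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        host = src.find("host")
        mon = f"{host.get('name')}:{host.get('port')}" if host is not None else ""
        out.append((disk.find("target").get("dev"), src.get("name"), mon))
    return out

print(rbd_sources(DOMAIN_XML))
```

On the full dump this also picks up the `sda` config-drive cdrom, since both disks in the log are `type="network"` with `protocol="rbd"`.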
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.700 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Preparing to wait for external event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.701 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.701 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.702 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.703 253542 DEBUG nova.virt.libvirt.vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.703 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.704 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.705 253542 DEBUG os_vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.707 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.708 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.714 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeba714df-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.715 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeba714df-d5, col_values=(('external_ids', {'iface-id': 'eba714df-d5db-464e-b5b6-6d56c52d33fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:3b:fd', 'vm-uuid': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:01 np0005534516 NetworkManager[48915]: <info>  [1764061141.7184] manager: (tapeba714df-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.726 253542 INFO os_vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5')#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.782 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.782 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.783 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:55:3b:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.784 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Using config drive#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.815 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:01 np0005534516 nova_compute[253538]: 2025-11-25 08:59:01.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 03:59:02 np0005534516 objective_herschel[386283]: {
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_id": 1,
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "type": "bluestore"
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    },
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_id": 2,
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "type": "bluestore"
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    },
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_id": 0,
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:        "type": "bluestore"
Nov 25 03:59:02 np0005534516 objective_herschel[386283]:    }
Nov 25 03:59:02 np0005534516 objective_herschel[386283]: }
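The JSON block emitted by the `objective_herschel` helper container above maps OSD UUIDs to their BlueStore backing devices (the shape matches `ceph-volume lvm list --format json`, though the exact command is not shown in the log). A short sketch that reduces it to an osd_id → device map, using an abridged copy of the logged output:

```python
# Summarise the ceph-volume-style JSON the helper container printed:
# map each osd_id to its backing device. OSD_JSON is abridged from the log.
import json

OSD_JSON = """
{
  "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
    "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
    "device": "/dev/mapper/ceph_vg1-ceph_lv1",
    "osd_id": 1,
    "type": "bluestore"
  },
  "fa0f8d90-5df8-4d42-9078-da082765696d": {
    "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "type": "bluestore"
  }
}
"""

def osd_devices(raw: str) -> dict:
    """Return {osd_id: device}, ordered by OSD id."""
    data = json.loads(raw)
    return {
        entry["osd_id"]: entry["device"]
        for entry in sorted(data.values(), key=lambda e: e["osd_id"])
    }

print(osd_devices(OSD_JSON))
```

On the full log output this yields all three OSDs (0, 1, 2) on `ceph_vg0`–`ceph_vg2`, all sharing the cluster fsid `a058ea16-8b73-51e1-b172-ed66107102bf` — the same fsid referenced by the `<secret type="ceph">` auth element in the domain XML.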
Nov 25 03:59:02 np0005534516 systemd[1]: libpod-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope: Deactivated successfully.
Nov 25 03:59:02 np0005534516 podman[386378]: 2025-11-25 08:59:02.09722059 +0000 UTC m=+0.020987342 container died eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.107 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updated VIF entry in instance network info cache for port eba714df-d5db-464e-b5b6-6d56c52d33fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.108 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.119 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:59:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f-merged.mount: Deactivated successfully.
Nov 25 03:59:02 np0005534516 podman[386378]: 2025-11-25 08:59:02.171867392 +0000 UTC m=+0.095634114 container remove eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 03:59:02 np0005534516 systemd[1]: libpod-conmon-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope: Deactivated successfully.
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.214 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating config drive at /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config#033[00m
Nov 25 03:59:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.219 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdzg0rwe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:59:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 03:59:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:59:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev dc077c3a-9fbe-4feb-95a0-835b50048c04 does not exist
Nov 25 03:59:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 41f705ae-e521-4a0a-a2d3-ddb6b07b7078 does not exist
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.357 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdzg0rwe" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.383 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.387 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.625 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.626 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deleting local config drive /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config because it was imported into RBD.#033[00m
Nov 25 03:59:02 np0005534516 kernel: tapeba714df-d5: entered promiscuous mode
Nov 25 03:59:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:02Z|01295|binding|INFO|Claiming lport eba714df-d5db-464e-b5b6-6d56c52d33fd for this chassis.
Nov 25 03:59:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:02Z|01296|binding|INFO|eba714df-d5db-464e-b5b6-6d56c52d33fd: Claiming fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 03:59:02 np0005534516 NetworkManager[48915]: <info>  [1764061142.7018] manager: (tapeba714df-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.701 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.711 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:3b:fd 10.100.0.6'], port_security=['fa:16:3e:55:3b:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eba714df-d5db-464e-b5b6-6d56c52d33fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.714 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eba714df-d5db-464e-b5b6-6d56c52d33fd in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 bound to our chassis#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.717 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0#033[00m
Nov 25 03:59:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:02Z|01297|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd ovn-installed in OVS
Nov 25 03:59:02 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:02Z|01298|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd up in Southbound
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.741 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c643d5-85bf-49c1-a9c8-65dffb8bee2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 systemd-machined[215790]: New machine qemu-156-instance-0000007e.
Nov 25 03:59:02 np0005534516 systemd-udevd[386498]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:59:02 np0005534516 NetworkManager[48915]: <info>  [1764061142.7694] device (tapeba714df-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:59:02 np0005534516 systemd[1]: Started Virtual Machine qemu-156-instance-0000007e.
Nov 25 03:59:02 np0005534516 NetworkManager[48915]: <info>  [1764061142.7707] device (tapeba714df-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.777 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01906bc5-d692-4070-a483-d42b1311182f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.781 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b36f2ac5-6097-4e2a-a9c3-d99300b77723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.811 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1799dadf-3f0d-4e6e-931f-fbb84e8c1eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fd05cc-ec85-4a83-9dff-919b2ef17905]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386505, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.849 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[99527602-66e2-4faf-8a4e-e2deb9efbdd7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646321, 'tstamp': 646321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386510, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646324, 'tstamp': 646324}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386510, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.855 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.857 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.858 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.927 253542 DEBUG nova.compute.manager [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.928 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.929 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.930 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:02 np0005534516 nova_compute[253538]: 2025-11-25 08:59:02.931 253542 DEBUG nova.compute.manager [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Processing event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:59:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.270 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.270425, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Started (Lifecycle Event)#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.274 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.278 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.281 253542 INFO nova.virt.libvirt.driver [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance spawned successfully.#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.282 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.441 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.448 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.449 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.449 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.450 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.451 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.451 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.459 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.488 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.488 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.2706258, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.489 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.505 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.509 253542 INFO nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 5.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.509 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.2769334, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.513 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.564 253542 INFO nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 7.03 seconds to build instance.#033[00m
Nov 25 03:59:03 np0005534516 nova_compute[253538]: 2025-11-25 08:59:03.576 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 03:59:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.385 253542 DEBUG nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.386 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.386 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 DEBUG nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:59:05 np0005534516 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 WARNING nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state active and task_state None.
Nov 25 03:59:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 03:59:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:06.524 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:59:06 np0005534516 nova_compute[253538]: 2025-11-25 08:59:06.718 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.547 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.548 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.590 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.790 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.790 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.797 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.797 253542 INFO nova.compute.claims [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:59:08 np0005534516 nova_compute[253538]: 2025-11-25 08:59:08.957 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947983238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.420 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.426 253542 DEBUG nova.compute.provider_tree [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.439 253542 DEBUG nova.scheduler.client.report [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.459 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.461 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.505 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.505 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.522 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.545 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.632 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.633 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.634 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating image(s)
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.660 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.687 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.713 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.718 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.757 253542 DEBUG nova.policy [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.799 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.800 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.800 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.801 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.826 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 03:59:09 np0005534516 nova_compute[253538]: 2025-11-25 08:59:09.830 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.165 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.217 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.337 253542 DEBUG nova.objects.instance [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.357 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.358 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Ensure instance console log exists: /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.358 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.359 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.359 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:10 np0005534516 nova_compute[253538]: 2025-11-25 08:59:10.661 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Successfully created port: 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 03:59:11 np0005534516 nova_compute[253538]: 2025-11-25 08:59:11.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 226 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 76 op/s
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.060 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Successfully updated port: 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.075 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.076 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.076 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.149 253542 DEBUG nova.compute.manager [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.149 253542 DEBUG nova.compute.manager [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.150 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:59:12 np0005534516 nova_compute[253538]: 2025-11-25 08:59:12.333 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 03:59:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.235 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.260 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance network_info: |[{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.264 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start _get_guest_xml network_info=[{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.268 253542 WARNING nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.274 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.275 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.284 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:59:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555515879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.751 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.781 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:13 np0005534516 nova_compute[253538]: 2025-11-25 08:59:13.786 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 246 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Nov 25 03:59:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 03:59:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438497949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.243 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.245 253542 DEBUG nova.virt.libvirt.vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.245 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.246 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.247 253542 DEBUG nova.objects.instance [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.263 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <uuid>6a6a3230-e005-48d6-b758-3cf5d4f9410f</uuid>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <name>instance-0000007f</name>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestNetworkBasicOps-server-571013725</nova:name>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 08:59:13</nova:creationTime>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <nova:port uuid="0b7bd252-0c0e-43fd-b9ae-27d615ec9c29">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <system>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="serial">6a6a3230-e005-48d6-b758-3cf5d4f9410f</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="uuid">6a6a3230-e005-48d6-b758-3cf5d4f9410f</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </system>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <os>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </os>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <features>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </features>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </clock>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  <devices>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:61:6b:c3"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <target dev="tap0b7bd252-0c"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </interface>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/console.log" append="off"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </serial>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <video>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </video>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </rng>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 03:59:14 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 03:59:14 np0005534516 nova_compute[253538]:  </devices>
Nov 25 03:59:14 np0005534516 nova_compute[253538]: </domain>
Nov 25 03:59:14 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.264 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Preparing to wait for external event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.265 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.266 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.266 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.267 253542 DEBUG nova.virt.libvirt.vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.267 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.268 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.269 253542 DEBUG os_vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.270 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.271 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.276 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b7bd252-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.277 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b7bd252-0c, col_values=(('external_ids', {'iface-id': '0b7bd252-0c0e-43fd-b9ae-27d615ec9c29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:6b:c3', 'vm-uuid': '6a6a3230-e005-48d6-b758-3cf5d4f9410f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:14 np0005534516 NetworkManager[48915]: <info>  [1764061154.2796] manager: (tap0b7bd252-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.286 253542 INFO os_vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c')#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.359 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:61:6b:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.361 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Using config drive#033[00m
Nov 25 03:59:14 np0005534516 nova_compute[253538]: 2025-11-25 08:59:14.385 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:14 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 03:59:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 260 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.125 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating config drive at /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.131 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuy8mhbhj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.290 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuy8mhbhj" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.333 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.338 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.383 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.385 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.402 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.663 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.663 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deleting local config drive /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config because it was imported into RBD.#033[00m
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 03:59:16 np0005534516 kernel: tap0b7bd252-0c: entered promiscuous mode
Nov 25 03:59:16 np0005534516 NetworkManager[48915]: <info>  [1764061156.7238] manager: (tap0b7bd252-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|01299|binding|INFO|Claiming lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for this chassis.
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|01300|binding|INFO|0b7bd252-0c0e-43fd-b9ae-27d615ec9c29: Claiming fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.755 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:6b:c3 10.100.0.8'], port_security=['fa:16:3e:61:6b:c3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a6a3230-e005-48d6-b758-3cf5d4f9410f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fbcd9381-e965-435b-8fa1-373c60075d6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff74e0b1-b375-483e-985d-fce7814dd7fc, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.757 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 in datapath d0402a09-5c1d-4dec-b1c6-38e77edc4409 bound to our chassis#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.759 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0402a09-5c1d-4dec-b1c6-38e77edc4409#033[00m
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|01301|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 ovn-installed in OVS
Nov 25 03:59:16 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:16Z|01302|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 up in Southbound
Nov 25 03:59:16 np0005534516 nova_compute[253538]: 2025-11-25 08:59:16.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.783 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a7621050-37f1-4844-a7be-dd0e82406349]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.784 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0402a09-51 in ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.787 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0402a09-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8392d444-b89a-4370-b699-04e5e8c81229]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.790 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57bba2ea-0f30-45c0-8f97-b1d91000dfea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 systemd-udevd[386891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.803 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ac89dd-6c76-4092-b916-960517dd9b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 systemd-machined[215790]: New machine qemu-157-instance-0000007f.
Nov 25 03:59:16 np0005534516 NetworkManager[48915]: <info>  [1764061156.8084] device (tap0b7bd252-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 03:59:16 np0005534516 NetworkManager[48915]: <info>  [1764061156.8099] device (tap0b7bd252-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 03:59:16 np0005534516 systemd[1]: Started Virtual Machine qemu-157-instance-0000007f.
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2e3f1e-ad04-415b-8e44-427b599a684e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 podman[386875]: 2025-11-25 08:59:16.847286566 +0000 UTC m=+0.076273947 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.865 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e64ed48f-72dd-4a0a-b748-924452be6136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 NetworkManager[48915]: <info>  [1764061156.8715] manager: (tapd0402a09-50): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Nov 25 03:59:16 np0005534516 systemd-udevd[386903]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0e509a-4df9-41e7-946c-1a7a88b7e6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 podman[386876]: 2025-11-25 08:59:16.888544779 +0000 UTC m=+0.105046150 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.915 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2d5db-1a72-4e8a-ac4a-864af39e5f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.924 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb94b71-88ab-49a4-b1db-0f287be1c85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 NetworkManager[48915]: <info>  [1764061156.9477] device (tapd0402a09-50): carrier: link connected
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.958 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d35d6de0-0ece-45e9-8ff5-a49ffea96d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59cc16af-fc12-4983-87b9-5ade18d9ed6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0402a09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9d:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651154, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386949, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8d688a-81c0-4122-b0ac-351a72026e84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:9dde'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651154, 'tstamp': 651154}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386950, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.030 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c53a19ef-0286-4d76-84bd-181c20d6696c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0402a09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9d:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651154, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386951, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2850f8a7-991f-4ec9-8bf4-74cdda115853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c44bd1f4-848d-4d56-92c3-bc7129a72041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0402a09-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0402a09-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:17 np0005534516 NetworkManager[48915]: <info>  [1764061157.1292] manager: (tapd0402a09-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 25 03:59:17 np0005534516 kernel: tapd0402a09-50: entered promiscuous mode
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.133 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0402a09-50, col_values=(('external_ids', {'iface-id': '936f7aae-c7b9-4c9a-a88d-93ca5394771e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:17 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:17Z|01303|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.138 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2adfc2c7-377f-4066-8764-94d94fcc7f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.141 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-d0402a09-5c1d-4dec-b1c6-38e77edc4409
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID d0402a09-5c1d-4dec-b1c6-38e77edc4409
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 03:59:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.142 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'env', 'PROCESS_TAG=haproxy-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0402a09-5c1d-4dec-b1c6-38e77edc4409.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.282 253542 DEBUG nova.compute.manager [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.283 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.283 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.284 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.284 253542 DEBUG nova.compute.manager [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Processing event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.369 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.36897, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.370 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Started (Lifecycle Event)#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.372 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.376 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.380 253542 INFO nova.virt.libvirt.driver [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance spawned successfully.#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.380 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.401 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.408 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.408 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.409 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.448 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.3692114, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.448 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Paused (Lifecycle Event)#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.468 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.471 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.375583, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.472 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.480 253542 INFO nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.480 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.505 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.509 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.547 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.552 253542 INFO nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 8.80 seconds to build instance.#033[00m
Nov 25 03:59:17 np0005534516 nova_compute[253538]: 2025-11-25 08:59:17.565 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:17 np0005534516 podman[387025]: 2025-11-25 08:59:17.56717034 +0000 UTC m=+0.090103893 container create a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:59:17 np0005534516 podman[387025]: 2025-11-25 08:59:17.51240891 +0000 UTC m=+0.035342493 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 03:59:17 np0005534516 systemd[1]: Started libpod-conmon-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope.
Nov 25 03:59:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 03:59:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2bcefa150aa50a2485c101e427760b62e7f7acb3630901c864073572e7a19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 03:59:17 np0005534516 podman[387025]: 2025-11-25 08:59:17.661426026 +0000 UTC m=+0.184359599 container init a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 03:59:17 np0005534516 podman[387025]: 2025-11-25 08:59:17.669221229 +0000 UTC m=+0.192154782 container start a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 03:59:17 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : New worker (387046) forked
Nov 25 03:59:17 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : Loading success.
Nov 25 03:59:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 264 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 03:59:18 np0005534516 nova_compute[253538]: 2025-11-25 08:59:18.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.400 253542 DEBUG nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.402 253542 DEBUG nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:59:19 np0005534516 nova_compute[253538]: 2025-11-25 08:59:19.402 253542 WARNING nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state active and task_state None.#033[00m
Nov 25 03:59:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 292 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.756 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.759 253542 INFO nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Terminating instance#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.760 253542 DEBUG nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:59:21 np0005534516 kernel: tapeba714df-d5 (unregistering): left promiscuous mode
Nov 25 03:59:21 np0005534516 NetworkManager[48915]: <info>  [1764061161.8057] device (tapeba714df-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:59:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:21Z|01304|binding|INFO|Releasing lport eba714df-d5db-464e-b5b6-6d56c52d33fd from this chassis (sb_readonly=0)
Nov 25 03:59:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:21Z|01305|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd down in Southbound
Nov 25 03:59:21 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:21Z|01306|binding|INFO|Removing iface tapeba714df-d5 ovn-installed in OVS
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:21 np0005534516 nova_compute[253538]: 2025-11-25 08:59:21.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.881 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:3b:fd 10.100.0.6'], port_security=['fa:16:3e:55:3b:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eba714df-d5db-464e-b5b6-6d56c52d33fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.883 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eba714df-d5db-464e-b5b6-6d56c52d33fd in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 unbound from our chassis#033[00m
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0#033[00m
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3aa89b-9876-4ebb-b6e6-a41835d336ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:21 np0005534516 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 25 03:59:21 np0005534516 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007e.scope: Consumed 14.262s CPU time.
Nov 25 03:59:21 np0005534516 systemd-machined[215790]: Machine qemu-156-instance-0000007e terminated.
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.936 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad1445d-42ea-49a2-8573-2b4f3d18f141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.940 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a297c185-efc8-42ff-b3e3-23fbdfa53ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 293 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 25 03:59:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.989 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[863774c9-155c-4ee9-bee6-1f57a3455275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.004 253542 INFO nova.virt.libvirt.driver [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance destroyed successfully.#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.005 253542 DEBUG nova.objects.instance [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76ea3350-f438-40de-a365-d286903996bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387075, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.018 253542 DEBUG nova.virt.libvirt.vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:59:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:59:03Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.019 253542 DEBUG nova.network.os_vif_util [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.020 253542 DEBUG nova.network.os_vif_util [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.020 253542 DEBUG os_vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.024 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeba714df-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.031 253542 INFO os_vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5')#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90379492-0f7a-48dd-ade5-2ad0d676f686]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646321, 'tstamp': 646321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387078, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646324, 'tstamp': 646324}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387078, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.042 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.044 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:22 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.062 253542 DEBUG nova.compute.manager [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.062 253542 DEBUG nova.compute.manager [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:59:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.639 253542 INFO nova.virt.libvirt.driver [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deleting instance files /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_del#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.641 253542 INFO nova.virt.libvirt.driver [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deletion of /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_del complete#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.808 253542 INFO nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.808 253542 DEBUG oslo.service.loopingcall [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.809 253542 DEBUG nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:59:22 np0005534516 nova_compute[253538]: 2025-11-25 08:59:22.809 253542 DEBUG nova.network.neutron [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:59:23 np0005534516 nova_compute[253538]: 2025-11-25 08:59:23.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:23 np0005534516 podman[387098]: 2025-11-25 08:59:23.875099474 +0000 UTC m=+0.117160271 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 03:59:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 263 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 175 op/s
Nov 25 03:59:23 np0005534516 nova_compute[253538]: 2025-11-25 08:59:23.973 253542 DEBUG nova.network.neutron [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:23 np0005534516 nova_compute[253538]: 2025-11-25 08:59:23.993 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:59:23 np0005534516 nova_compute[253538]: 2025-11-25 08:59:23.994 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:23 np0005534516 nova_compute[253538]: 2025-11-25 08:59:23.997 253542 INFO nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 1.19 seconds to deallocate network for instance.#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.029 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.054 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.054 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.116 253542 DEBUG nova.compute.manager [req-0f1e83af-d9e9-44b0-a11a-a28f9b4f3df5 req-ba9990ea-bd40-43c6-a4d2-5f4ada38799f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-deleted-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.167 253542 DEBUG oslo_concurrency.processutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.218 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.219 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.220 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.221 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.222 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.222 253542 WARNING nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.223 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.224 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.225 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.225 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.226 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.227 253542 WARNING nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state deleted and task_state None.#033[00m
Nov 25 03:59:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104438142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.632 253542 DEBUG oslo_concurrency.processutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.638 253542 DEBUG nova.compute.provider_tree [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.652 253542 DEBUG nova.scheduler.client.report [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.676 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.708 253542 INFO nova.scheduler.client.report [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 68c6ff41-ad19-4b3d-947d-0a5d72e4042c#033[00m
Nov 25 03:59:24 np0005534516 nova_compute[253538]: 2025-11-25 08:59:24.794 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.702 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.704 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.705 253542 INFO nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Terminating instance#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.706 253542 DEBUG nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:59:25 np0005534516 kernel: tap6e0acf79-71 (unregistering): left promiscuous mode
Nov 25 03:59:25 np0005534516 NetworkManager[48915]: <info>  [1764061165.7902] device (tap6e0acf79-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:25Z|01307|binding|INFO|Releasing lport 6e0acf79-7148-4555-9265-b449f234806e from this chassis (sb_readonly=0)
Nov 25 03:59:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:25Z|01308|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e down in Southbound
Nov 25 03:59:25 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:25Z|01309|binding|INFO|Removing iface tap6e0acf79-71 ovn-installed in OVS
Nov 25 03:59:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.822 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:34 10.100.0.5'], port_security=['fa:16:3e:f7:96:34 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9447890d-1fff-4536-a0cd-b889c23f7479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14 31fd3dba-a142-469b-a6ad-eb14c55eb5d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6e0acf79-7148-4555-9265-b449f234806e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.824 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0acf79-7148-4555-9265-b449f234806e in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 unbound from our chassis#033[00m
Nov 25 03:59:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.827 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:59:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9abff8-b63a-4746-b256-623ba95d6122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.831 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 namespace which is not needed anymore#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:25 np0005534516 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 25 03:59:25 np0005534516 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007d.scope: Consumed 15.955s CPU time.
Nov 25 03:59:25 np0005534516 systemd-machined[215790]: Machine qemu-155-instance-0000007d terminated.
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.949 253542 INFO nova.virt.libvirt.driver [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance destroyed successfully.#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.950 253542 DEBUG nova.objects.instance [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.963 253542 DEBUG nova.virt.libvirt.vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:58:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:58:29Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.963 253542 DEBUG nova.network.os_vif_util [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.964 253542 DEBUG nova.network.os_vif_util [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.964 253542 DEBUG os_vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.967 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e0acf79-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 222 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:59:25 np0005534516 nova_compute[253538]: 2025-11-25 08:59:25.974 253542 INFO os_vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71')#033[00m
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : haproxy version is 2.8.14-c23fe91
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : path to executable is /usr/sbin/haproxy
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : Exiting Master process...
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : Exiting Master process...
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [ALERT]    (384971) : Current worker (384973) exited with code 143 (Terminated)
Nov 25 03:59:25 np0005534516 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : All workers exited. Exiting... (0)
Nov 25 03:59:25 np0005534516 systemd[1]: libpod-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope: Deactivated successfully.
Nov 25 03:59:26 np0005534516 podman[387169]: 2025-11-25 08:59:26.003106655 +0000 UTC m=+0.088519430 container died 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 03:59:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31-userdata-shm.mount: Deactivated successfully.
Nov 25 03:59:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1e04d9311f2566d0bdda347b5c2a05beefee95a9cbfe91f3a4e7e52afc379d7d-merged.mount: Deactivated successfully.
Nov 25 03:59:26 np0005534516 podman[387169]: 2025-11-25 08:59:26.062087141 +0000 UTC m=+0.147499906 container cleanup 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 03:59:26 np0005534516 systemd[1]: libpod-conmon-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope: Deactivated successfully.
Nov 25 03:59:26 np0005534516 podman[387227]: 2025-11-25 08:59:26.12708955 +0000 UTC m=+0.044507133 container remove 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.134 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[42aa46af-a6ca-4c3b-981c-7dcfb77a6128]: (4, ('Tue Nov 25 08:59:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 (6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31)\n6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31\nTue Nov 25 08:59:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 (6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31)\n6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5916862-9d53-4599-9eb6-8b55b19b3076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.138 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 03:59:26 np0005534516 kernel: tapbfb2a9da-10: left promiscuous mode
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.157 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[558bf816-d1b6-4c0d-865f-b814d834195b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.176 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79de7430-36a9-4bbd-be2a-8da5aeed1eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf2e6a6-827e-44df-b0f1-2f5786ee369d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.184 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.185 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.185 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.186 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.186 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.188 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.188 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.189 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.189 253542 WARNING nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received unexpected event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e for instance with vm_state active and task_state deleting.
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b78be2f1-010b-40ee-b828-9b7ed1ce599d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646301, 'reachable_time': 19680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387242, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 systemd[1]: run-netns-ovnmeta\x2dbfb2a9da\x2d10d5\x2d4cf0\x2da585\x2da59d66a02fa0.mount: Deactivated successfully.
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.199 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 03:59:26 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.200 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[155f18cc-57d2-489f-bb16-dcc8b24af6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.242 253542 DEBUG nova.compute.manager [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG nova.compute.manager [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.395 253542 INFO nova.virt.libvirt.driver [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deleting instance files /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479_del
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.396 253542 INFO nova.virt.libvirt.driver [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deletion of /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479_del complete
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.440 253542 INFO nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.441 253542 DEBUG oslo.service.loopingcall [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.442 253542 DEBUG nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 03:59:26 np0005534516 nova_compute[253538]: 2025-11-25 08:59:26.442 253542 DEBUG nova.network.neutron [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 03:59:27 np0005534516 nova_compute[253538]: 2025-11-25 08:59:27.582 253542 DEBUG nova.network.neutron [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:59:27 np0005534516 nova_compute[253538]: 2025-11-25 08:59:27.602 253542 INFO nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 1.16 seconds to deallocate network for instance.
Nov 25 03:59:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:27 np0005534516 nova_compute[253538]: 2025-11-25 08:59:27.668 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:27 np0005534516 nova_compute[253538]: 2025-11-25 08:59:27.669 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:27 np0005534516 nova_compute[253538]: 2025-11-25 08:59:27.762 253542 DEBUG oslo_concurrency.processutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 186 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Nov 25 03:59:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544755068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.218 253542 DEBUG oslo_concurrency.processutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.228 253542 DEBUG nova.compute.provider_tree [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.258 253542 DEBUG nova.scheduler.client.report [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.268 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.269 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.282 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.291 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.327 253542 INFO nova.scheduler.client.report [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 9447890d-1fff-4536-a0cd-b889c23f7479
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.402 253542 DEBUG nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-deleted-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.403 253542 INFO nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Neutron deleted interface 6e0acf79-7148-4555-9265-b449f234806e; detaching it from the instance and deleting it from the info cache
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.404 253542 DEBUG nova.network.neutron [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.433 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:28 np0005534516 nova_compute[253538]: 2025-11-25 08:59:28.441 253542 DEBUG nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Detach interface failed, port_id=6e0acf79-7148-4555-9265-b449f234806e, reason: Instance 9447890d-1fff-4536-a0cd-b889c23f7479 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 03:59:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 03:59:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 03:59:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 03:59:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 03:59:29 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 03:59:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 135 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Nov 25 03:59:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:30Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 03:59:30 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:30Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 03:59:30 np0005534516 nova_compute[253538]: 2025-11-25 08:59:30.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 154 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Nov 25 03:59:32 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:32Z|01310|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 03:59:32 np0005534516 nova_compute[253538]: 2025-11-25 08:59:32.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:33 np0005534516 nova_compute[253538]: 2025-11-25 08:59:33.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 163 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 03:59:35 np0005534516 nova_compute[253538]: 2025-11-25 08:59:35.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Nov 25 03:59:36 np0005534516 nova_compute[253538]: 2025-11-25 08:59:36.837 253542 INFO nova.compute.manager [None req-2c58d1b1-3f79-4c03-8851-0545f2f45b85 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output#033[00m
Nov 25 03:59:36 np0005534516 nova_compute[253538]: 2025-11-25 08:59:36.843 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:59:37 np0005534516 nova_compute[253538]: 2025-11-25 08:59:37.001 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061161.9994311, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:37 np0005534516 nova_compute[253538]: 2025-11-25 08:59:37.002 253542 INFO nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:59:37 np0005534516 nova_compute[253538]: 2025-11-25 08:59:37.022 253542 DEBUG nova.compute.manager [None req-c0d5e154-31e6-474f-8de5-98352445c924 - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.131093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177131170, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1700, "num_deletes": 251, "total_data_size": 2649244, "memory_usage": 2681744, "flush_reason": "Manual Compaction"}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177147532, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2600111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48060, "largest_seqno": 49759, "table_properties": {"data_size": 2592404, "index_size": 4586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16299, "raw_average_key_size": 20, "raw_value_size": 2576882, "raw_average_value_size": 3177, "num_data_blocks": 204, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061003, "oldest_key_time": 1764061003, "file_creation_time": 1764061177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 16484 microseconds, and 7014 cpu microseconds.
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.147583) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2600111 bytes OK
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.147609) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149438) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149457) EVENT_LOG_v1 {"time_micros": 1764061177149451, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2641917, prev total WAL file size 2641917, number of live WAL files 2.
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.150430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(2539KB)], [113(7605KB)]
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177150483, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10387684, "oldest_snapshot_seqno": -1}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6974 keys, 8624620 bytes, temperature: kUnknown
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177207842, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8624620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8580112, "index_size": 25964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 182367, "raw_average_key_size": 26, "raw_value_size": 8457288, "raw_average_value_size": 1212, "num_data_blocks": 1010, "num_entries": 6974, "num_filter_entries": 6974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.208170) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8624620 bytes
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.209740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.8 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.4 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 7488, records dropped: 514 output_compression: NoCompression
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.209771) EVENT_LOG_v1 {"time_micros": 1764061177209757, "job": 68, "event": "compaction_finished", "compaction_time_micros": 57449, "compaction_time_cpu_micros": 23656, "output_level": 6, "num_output_files": 1, "total_output_size": 8624620, "num_input_records": 7488, "num_output_records": 6974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177210881, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177213562, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.150264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 03:59:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Nov 25 03:59:38 np0005534516 nova_compute[253538]: 2025-11-25 08:59:38.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:38Z|01311|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 03:59:38 np0005534516 nova_compute[253538]: 2025-11-25 08:59:38.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:38 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:38Z|01312|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 03:59:38 np0005534516 nova_compute[253538]: 2025-11-25 08:59:38.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:38 np0005534516 nova_compute[253538]: 2025-11-25 08:59:38.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:59:38 np0005534516 nova_compute[253538]: 2025-11-25 08:59:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:59:39 np0005534516 nova_compute[253538]: 2025-11-25 08:59:39.657 253542 INFO nova.compute.manager [None req-8557c5e5-95f0-4f75-a2cd-3a1b89257541 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output#033[00m
Nov 25 03:59:39 np0005534516 nova_compute[253538]: 2025-11-25 08:59:39.664 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:59:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 25 03:59:40 np0005534516 nova_compute[253538]: 2025-11-25 08:59:40.948 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061165.947497, 9447890d-1fff-4536-a0cd-b889c23f7479 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:40 np0005534516 nova_compute[253538]: 2025-11-25 08:59:40.948 253542 INFO nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:59:40 np0005534516 nova_compute[253538]: 2025-11-25 08:59:40.973 253542 DEBUG nova.compute.manager [None req-7b1068af-f4a0-4917-b708-4fa46f40ab23 - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:40 np0005534516 nova_compute[253538]: 2025-11-25 08:59:40.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.082 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.083 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:41 np0005534516 nova_compute[253538]: 2025-11-25 08:59:41.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:41 np0005534516 NetworkManager[48915]: <info>  [1764061181.4046] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Nov 25 03:59:41 np0005534516 NetworkManager[48915]: <info>  [1764061181.4062] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 25 03:59:41 np0005534516 nova_compute[253538]: 2025-11-25 08:59:41.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:41 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:41Z|01313|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 03:59:41 np0005534516 nova_compute[253538]: 2025-11-25 08:59:41.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Nov 25 03:59:42 np0005534516 nova_compute[253538]: 2025-11-25 08:59:42.115 253542 INFO nova.compute.manager [None req-ee83d514-173b-4d71-bf5f-e0a4f83342a5 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output#033[00m
Nov 25 03:59:42 np0005534516 nova_compute[253538]: 2025-11-25 08:59:42.122 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 03:59:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.140 253542 DEBUG nova.compute.manager [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.140 253542 DEBUG nova.compute.manager [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.229 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.230 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.233 253542 INFO nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Terminating instance#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.234 253542 DEBUG nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 03:59:43 np0005534516 kernel: tap0b7bd252-0c (unregistering): left promiscuous mode
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 NetworkManager[48915]: <info>  [1764061183.3656] device (tap0b7bd252-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 03:59:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:43Z|01314|binding|INFO|Releasing lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 from this chassis (sb_readonly=0)
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:43Z|01315|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 down in Southbound
Nov 25 03:59:43 np0005534516 ovn_controller[152859]: 2025-11-25T08:59:43Z|01316|binding|INFO|Removing iface tap0b7bd252-0c ovn-installed in OVS
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.380 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:6b:c3 10.100.0.8'], port_security=['fa:16:3e:61:6b:c3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a6a3230-e005-48d6-b758-3cf5d4f9410f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fbcd9381-e965-435b-8fa1-373c60075d6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff74e0b1-b375-483e-985d-fce7814dd7fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.381 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 in datapath d0402a09-5c1d-4dec-b1c6-38e77edc4409 unbound from our chassis#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.382 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0402a09-5c1d-4dec-b1c6-38e77edc4409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.383 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e638c993-31fb-440f-8bdb-4c073f0dfb58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.383 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 namespace which is not needed anymore#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 25 03:59:43 np0005534516 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007f.scope: Consumed 14.340s CPU time.
Nov 25 03:59:43 np0005534516 systemd-machined[215790]: Machine qemu-157-instance-0000007f terminated.
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.476 253542 INFO nova.virt.libvirt.driver [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance destroyed successfully.
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.477 253542 DEBUG nova.objects.instance [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.491 253542 DEBUG nova.virt.libvirt.vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:59:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:59:17Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.492 253542 DEBUG nova.network.os_vif_util [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.492 253542 DEBUG nova.network.os_vif_util [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.493 253542 DEBUG os_vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b7bd252-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.501 253542 INFO os_vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c')#033[00m
Nov 25 03:59:43 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : haproxy version is 2.8.14-c23fe91
Nov 25 03:59:43 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : path to executable is /usr/sbin/haproxy
Nov 25 03:59:43 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [WARNING]  (387044) : Exiting Master process...
Nov 25 03:59:43 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [ALERT]    (387044) : Current worker (387046) exited with code 143 (Terminated)
Nov 25 03:59:43 np0005534516 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [WARNING]  (387044) : All workers exited. Exiting... (0)
Nov 25 03:59:43 np0005534516 systemd[1]: libpod-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope: Deactivated successfully.
Nov 25 03:59:43 np0005534516 podman[387303]: 2025-11-25 08:59:43.52665306 +0000 UTC m=+0.046723933 container died a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 03:59:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407-userdata-shm.mount: Deactivated successfully.
Nov 25 03:59:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-30d2bcefa150aa50a2485c101e427760b62e7f7acb3630901c864073572e7a19-merged.mount: Deactivated successfully.
Nov 25 03:59:43 np0005534516 podman[387303]: 2025-11-25 08:59:43.569519496 +0000 UTC m=+0.089590359 container cleanup a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 03:59:43 np0005534516 systemd[1]: libpod-conmon-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope: Deactivated successfully.
Nov 25 03:59:43 np0005534516 podman[387353]: 2025-11-25 08:59:43.66776419 +0000 UTC m=+0.066874750 container remove a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.675 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8df159b-e6e1-412e-8d21-a21053f7c83b]: (4, ('Tue Nov 25 08:59:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 (a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407)\na41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407\nTue Nov 25 08:59:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 (a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407)\na41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d3b506-e857-49df-be14-fc941c5b1cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.680 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0402a09-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.683 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 kernel: tapd0402a09-50: left promiscuous mode
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60125edb-fc5b-40e5-b0cf-09ad03cf4af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.721 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d061b886-82c7-4a5a-8a51-5687da845419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bf1f59-93cf-4286-86e5-6942bae1c893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.741 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f620a3e9-0b3f-4cfa-bf6c-450e92b76b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651145, 'reachable_time': 44799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387366, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 systemd[1]: run-netns-ovnmeta\x2dd0402a09\x2d5c1d\x2d4dec\x2db1c6\x2d38e77edc4409.mount: Deactivated successfully.
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.744 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 03:59:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.744 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[484aa584-aca0-4857-8de7-a4323086c4df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.895 253542 INFO nova.virt.libvirt.driver [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deleting instance files /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f_del
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.896 253542 INFO nova.virt.libvirt.driver [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deletion of /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f_del complete
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.973 253542 INFO nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.974 253542 DEBUG oslo.service.loopingcall [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.974 253542 DEBUG nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 03:59:43 np0005534516 nova_compute[253538]: 2025-11-25 08:59:43.975 253542 DEBUG nova.network.neutron [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 03:59:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 421 KiB/s wr, 35 op/s
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.747 253542 DEBUG nova.network.neutron [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.762 253542 INFO nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 0.79 seconds to deallocate network for instance.
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.822 253542 DEBUG nova.compute.manager [req-6ed2b8f3-9a8e-4f39-84c7-d3300739df06 req-05d55742-92e9-4815-97ba-1b4c8dbfd731 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-deleted-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.840 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.841 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.903 253542 DEBUG oslo_concurrency.processutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.951 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 03:59:44 np0005534516 nova_compute[253538]: 2025-11-25 08:59:44.953 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.006 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.264 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.264 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.265 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.266 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.267 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.267 253542 WARNING nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state deleted and task_state None.
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.268 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.269 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.269 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.270 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.270 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.271 253542 WARNING nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state deleted and task_state None.
Nov 25 03:59:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456270002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.376 253542 DEBUG oslo_concurrency.processutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.384 253542 DEBUG nova.compute.provider_tree [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.399 253542 DEBUG nova.scheduler.client.report [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.424 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.518 253542 INFO nova.scheduler.client.report [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 6a6a3230-e005-48d6-b758-3cf5d4f9410f
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 03:59:45 np0005534516 nova_compute[253538]: 2025-11-25 08:59:45.625 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 149 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 20 KiB/s wr, 32 op/s
Nov 25 03:59:46 np0005534516 nova_compute[253538]: 2025-11-25 08:59:46.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:46 np0005534516 nova_compute[253538]: 2025-11-25 08:59:46.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:47 np0005534516 podman[387391]: 2025-11-25 08:59:47.848249998 +0000 UTC m=+0.081249162 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 03:59:47 np0005534516 podman[387390]: 2025-11-25 08:59:47.847907899 +0000 UTC m=+0.082654371 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 03:59:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 117 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 16 op/s
Nov 25 03:59:48 np0005534516 nova_compute[253538]: 2025-11-25 08:59:48.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:48 np0005534516 nova_compute[253538]: 2025-11-25 08:59:48.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:48 np0005534516 nova_compute[253538]: 2025-11-25 08:59:48.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:48 np0005534516 nova_compute[253538]: 2025-11-25 08:59:48.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:49 np0005534516 nova_compute[253538]: 2025-11-25 08:59:49.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 03:59:50 np0005534516 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110125167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.081 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.315 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.317 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3737MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.318 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.318 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.385 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.409 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885284282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.900 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.905 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:59:51 np0005534516 nova_compute[253538]: 2025-11-25 08:59:51.925 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:59:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.099 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.100 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.678 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.679 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.707 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.793 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.793 253542 INFO nova.compute.claims [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Claim successful on node compute-0.ctlplane.example.com
Nov 25 03:59:52 np0005534516 nova_compute[253538]: 2025-11-25 08:59:52.886 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:59:53
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 03:59:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 03:59:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4023664004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.409 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.419 253542 DEBUG nova.compute.provider_tree [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.441 253542 DEBUG nova.scheduler.client.report [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.480 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.481 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.538 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.539 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.564 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.589 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.861 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.863 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.863 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating image(s)#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.892 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.924 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.958 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:53 np0005534516 nova_compute[253538]: 2025-11-25 08:59:53.964 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:59:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.065 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.067 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.068 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.069 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.103 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.108 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4356e66d-96cf-4d55-bf3e-280638024374_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.154 253542 DEBUG nova.policy [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.424 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4356e66d-96cf-4d55-bf3e-280638024374_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.515 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.624 253542 DEBUG nova.objects.instance [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.640 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.640 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Ensure instance console log exists: /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.641 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.642 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 03:59:54 np0005534516 nova_compute[253538]: 2025-11-25 08:59:54.642 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 03:59:54 np0005534516 podman[387664]: 2025-11-25 08:59:54.918731648 +0000 UTC m=+0.163708057 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 03:59:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 105 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 448 KiB/s wr, 25 op/s
Nov 25 03:59:56 np0005534516 nova_compute[253538]: 2025-11-25 08:59:56.374 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Successfully created port: 6654b89e-a102-49f6-ad76-45e598fe2702 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 03:59:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 03:59:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 124 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Nov 25 03:59:58 np0005534516 nova_compute[253538]: 2025-11-25 08:59:58.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:58 np0005534516 nova_compute[253538]: 2025-11-25 08:59:58.474 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061183.4726167, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 03:59:58 np0005534516 nova_compute[253538]: 2025-11-25 08:59:58.475 253542 INFO nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 03:59:58 np0005534516 nova_compute[253538]: 2025-11-25 08:59:58.498 253542 DEBUG nova.compute.manager [None req-397aa31d-6142-47fb-9d04-282272f7baf2 - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 03:59:58 np0005534516 nova_compute[253538]: 2025-11-25 08:59:58.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.126 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Successfully updated port: 6654b89e-a102-49f6-ad76-45e598fe2702 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.422 253542 DEBUG nova.compute.manager [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.423 253542 DEBUG nova.compute.manager [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.423 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 03:59:59 np0005534516 nova_compute[253538]: 2025-11-25 08:59:59.508 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 03:59:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.630 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.835 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.836 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance network_info: |[{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.837 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.837 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.842 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start _get_guest_xml network_info=[{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.849 253542 WARNING nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.864 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.865 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.870 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.871 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.872 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.872 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.874 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.874 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.875 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.875 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.876 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.876 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.877 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.878 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.878 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.879 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:00:00 np0005534516 nova_compute[253538]: 2025-11-25 09:00:00.885 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:00:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513629832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.370 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.393 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.397 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:00:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022759584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.828 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.831 253542 DEBUG nova.virt.libvirt.vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:53Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.831 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.833 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.835 253542 DEBUG nova.objects.instance [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.854 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <uuid>4356e66d-96cf-4d55-bf3e-280638024374</uuid>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <name>instance-00000080</name>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795</nova:name>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:00:00</nova:creationTime>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <nova:port uuid="6654b89e-a102-49f6-ad76-45e598fe2702">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="serial">4356e66d-96cf-4d55-bf3e-280638024374</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="uuid">4356e66d-96cf-4d55-bf3e-280638024374</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4356e66d-96cf-4d55-bf3e-280638024374_disk">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/4356e66d-96cf-4d55-bf3e-280638024374_disk.config">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d9:12:97"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <target dev="tap6654b89e-a1"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/console.log" append="off"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:00:01 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:00:01 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:00:01 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:00:01 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.857 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Preparing to wait for external event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.858 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.858 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.859 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.860 253542 DEBUG nova.virt.libvirt.vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:53Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.861 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.862 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.863 253542 DEBUG os_vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.865 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.871 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6654b89e-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.872 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6654b89e-a1, col_values=(('external_ids', {'iface-id': '6654b89e-a102-49f6-ad76-45e598fe2702', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:12:97', 'vm-uuid': '4356e66d-96cf-4d55-bf3e-280638024374'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:01 np0005534516 NetworkManager[48915]: <info>  [1764061201.8760] manager: (tap6654b89e-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.881 253542 INFO os_vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1')#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.929 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:d9:12:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Using config drive#033[00m
Nov 25 04:00:01 np0005534516 nova_compute[253538]: 2025-11-25 09:00:01.953 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.519 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating config drive at /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.523 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu11xni_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.662 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu11xni_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.689 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.695 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config 4356e66d-96cf-4d55-bf3e-280638024374_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.849 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config 4356e66d-96cf-4d55-bf3e-280638024374_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.851 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deleting local config drive /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config because it was imported into RBD.#033[00m
Nov 25 04:00:02 np0005534516 kernel: tap6654b89e-a1: entered promiscuous mode
Nov 25 04:00:02 np0005534516 NetworkManager[48915]: <info>  [1764061202.9198] manager: (tap6654b89e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Nov 25 04:00:02 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:02Z|01317|binding|INFO|Claiming lport 6654b89e-a102-49f6-ad76-45e598fe2702 for this chassis.
Nov 25 04:00:02 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:02Z|01318|binding|INFO|6654b89e-a102-49f6-ad76-45e598fe2702: Claiming fa:16:3e:d9:12:97 10.100.0.4
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.929 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.930 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.950 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:02 np0005534516 systemd-udevd[387944]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:02 np0005534516 systemd-machined[215790]: New machine qemu-158-instance-00000080.
Nov 25 04:00:02 np0005534516 NetworkManager[48915]: <info>  [1764061202.9687] device (tap6654b89e-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:00:02 np0005534516 NetworkManager[48915]: <info>  [1764061202.9704] device (tap6654b89e-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:02 np0005534516 systemd[1]: Started Virtual Machine qemu-158-instance-00000080.
Nov 25 04:00:02 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:02Z|01319|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 ovn-installed in OVS
Nov 25 04:00:02 np0005534516 nova_compute[253538]: 2025-11-25 09:00:02.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:03 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:03Z|01320|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 up in Southbound
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.022 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:12:97 10.100.0.4'], port_security=['fa:16:3e:d9:12:97 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4356e66d-96cf-4d55-bf3e-280638024374', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f656b36-7475-4baf-b321-d82280dade68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c277aa6-98be-4b49-b9f4-357b27c34694 6ba9af76-9f60-4459-905f-068c6194f108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51801c6e-6c82-485c-b7cb-963a30ef2813, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6654b89e-a102-49f6-ad76-45e598fe2702) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.023 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6654b89e-a102-49f6-ad76-45e598fe2702 in datapath 0f656b36-7475-4baf-b321-d82280dade68 bound to our chassis#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.023 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f656b36-7475-4baf-b321-d82280dade68#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7be230f6-d24b-449f-a234-d81bf4883b02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.034 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f656b36-71 in ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.037 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f656b36-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87259df3-4a11-4702-a205-74bb2892dde1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.039 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff49599-07f2-4825-b37c-d346d91f0355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.054 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[28f14582-3762-488e-b0dd-9b1d42a10f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.077 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69e1c098-bbec-48a0-a5cb-424f352adb0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:03 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b2b08cd8-0faa-4dd0-87d3-3385d66cb740 does not exist
Nov 25 04:00:03 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d54ced60-48aa-4d9d-943d-1c1e32e8dbd9 does not exist
Nov 25 04:00:03 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 20363c20-0fb1-48d8-b86c-da7fe1a25cab does not exist
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06483a1d-b8c0-44f6-94a0-bc10f4f4504f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 NetworkManager[48915]: <info>  [1764061203.1169] manager: (tap0f656b36-70): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.115 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[221d3d2c-bb90-40a7-bc56-0beeae37220a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 systemd-udevd[387946]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.152 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[efeaf913-c755-42a0-a21c-4d85565fa2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.157 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7303f-accb-412e-800a-cc9158b5a5a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 NetworkManager[48915]: <info>  [1764061203.1920] device (tap0f656b36-70): carrier: link connected
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.200 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b676a1e4-68bf-4c8a-b2c9-df95253be650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.223 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a414ef7a-1b8c-491a-ad39-2258e442b9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f656b36-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:19:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655779, 'reachable_time': 38231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388017, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7266bba-5508-439c-b154-656e8bdf4953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:1969'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655779, 'tstamp': 655779}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388035, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fffe4b-b588-45b1-b794-a314d491bfeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f656b36-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:19:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655779, 'reachable_time': 38231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388039, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58503f1c-cd2b-4137-8de6-c73693c90eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e55f1c6a-db03-4a6f-8b44-02185feedf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.360 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f656b36-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f656b36-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:03 np0005534516 NetworkManager[48915]: <info>  [1764061203.3639] manager: (tap0f656b36-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 25 04:00:03 np0005534516 kernel: tap0f656b36-70: entered promiscuous mode
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.366 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f656b36-70, col_values=(('external_ids', {'iface-id': 'df5aa971-5155-42d7-8605-12093fd4412c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:03 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:03Z|01321|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.369 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.370 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a5b31e-72ef-4d79-8a72-33f352631694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.371 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-0f656b36-7475-4baf-b321-d82280dade68
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 0f656b36-7475-4baf-b321-d82280dade68
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.372 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'env', 'PROCESS_TAG=haproxy-0f656b36-7475-4baf-b321-d82280dade68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f656b36-7475-4baf-b321-d82280dade68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.545 253542 DEBUG nova.compute.manager [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG nova.compute.manager [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Processing event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.738 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.739 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.737384, 4356e66d-96cf-4d55-bf3e-280638024374 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.739 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Started (Lifecycle Event)#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.749 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.755 253542 INFO nova.virt.libvirt.driver [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance spawned successfully.#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.756 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:00:03 np0005534516 podman[388204]: 2025-11-25 09:00:03.771753986 +0000 UTC m=+0.058523204 container create af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.77963958 +0000 UTC m=+0.053774015 container create 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.792 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.798 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.800 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.801 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.803 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.805 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.806 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.811 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:00:03 np0005534516 systemd[1]: Started libpod-conmon-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope.
Nov 25 04:00:03 np0005534516 systemd[1]: Started libpod-conmon-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope.
Nov 25 04:00:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8292ddc888ebc952e3026b61ff6dd04593c40f1d79add281577b607917257e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:03 np0005534516 podman[388204]: 2025-11-25 09:00:03.747633379 +0000 UTC m=+0.034402617 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.75064023 +0000 UTC m=+0.024774695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.849 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.850 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.7387707, 4356e66d-96cf-4d55-bf3e-280638024374 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.850 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:00:03 np0005534516 podman[388204]: 2025-11-25 09:00:03.855591888 +0000 UTC m=+0.142361116 container init af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.859286508 +0000 UTC m=+0.133420963 container init 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 04:00:03 np0005534516 podman[388204]: 2025-11-25 09:00:03.862349121 +0000 UTC m=+0.149118349 container start af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.868133659 +0000 UTC m=+0.142268104 container start 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.870 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:00:03 np0005534516 silly_snyder[388233]: 167 167
Nov 25 04:00:03 np0005534516 systemd[1]: libpod-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope: Deactivated successfully.
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.875301364 +0000 UTC m=+0.149435829 container attach 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.875768277 +0000 UTC m=+0.149902742 container died 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.877 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.7431898, 4356e66d-96cf-4d55-bf3e-280638024374 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.877 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:00:03 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : New worker (388245) forked
Nov 25 04:00:03 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : Loading success.
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.893 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.896 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.899 253542 INFO nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.899 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:00:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-82d83fc8e248d99a3a95afad2b14db4cab22474e81f5eeb268b780faffe52862-merged.mount: Deactivated successfully.
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.911 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:00:03 np0005534516 podman[388210]: 2025-11-25 09:00:03.926138888 +0000 UTC m=+0.200273333 container remove 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:00:03 np0005534516 systemd[1]: libpod-conmon-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope: Deactivated successfully.
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.980 253542 INFO nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 11.22 seconds to build instance.
Nov 25 04:00:03 np0005534516 nova_compute[253538]: 2025-11-25 09:00:03.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.982 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:00:03 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 04:00:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 04:00:04 np0005534516 nova_compute[253538]: 2025-11-25 09:00:04.004 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:04 np0005534516 podman[388271]: 2025-11-25 09:00:04.099091225 +0000 UTC m=+0.045814478 container create 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:00:04 np0005534516 systemd[1]: Started libpod-conmon-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope.
Nov 25 04:00:04 np0005534516 podman[388271]: 2025-11-25 09:00:04.078235387 +0000 UTC m=+0.024958670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:04 np0005534516 podman[388271]: 2025-11-25 09:00:04.205342598 +0000 UTC m=+0.152065871 container init 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 04:00:04 np0005534516 podman[388271]: 2025-11-25 09:00:04.213741836 +0000 UTC m=+0.160465089 container start 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:00:04 np0005534516 podman[388271]: 2025-11-25 09:00:04.216917052 +0000 UTC m=+0.163640365 container attach 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:00:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:00:04 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:04.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:00:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:00:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.5 total, 600.0 interval
Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
Cumulative WAL: 38K writes, 13K syncs, 2.87 writes per sync, written: 0.15 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5228 writes, 21K keys, 5228 commit groups, 1.0 writes per commit group, ingest: 24.78 MB, 0.04 MB/s
Interval WAL: 5228 writes, 1957 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:00:05 np0005534516 busy_khayyam[388287]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:00:05 np0005534516 busy_khayyam[388287]: --> relative data size: 1.0
Nov 25 04:00:05 np0005534516 busy_khayyam[388287]: --> All data devices are unavailable
Nov 25 04:00:05 np0005534516 systemd[1]: libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Deactivated successfully.
Nov 25 04:00:05 np0005534516 systemd[1]: libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Consumed 1.073s CPU time.
Nov 25 04:00:05 np0005534516 conmon[388287]: conmon 94991b75ca74a20f2ee6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope/container/memory.events
Nov 25 04:00:05 np0005534516 podman[388271]: 2025-11-25 09:00:05.392191181 +0000 UTC m=+1.338914464 container died 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:00:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f-merged.mount: Deactivated successfully.
Nov 25 04:00:05 np0005534516 podman[388271]: 2025-11-25 09:00:05.468476538 +0000 UTC m=+1.415199791 container remove 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:00:05 np0005534516 systemd[1]: libpod-conmon-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Deactivated successfully.
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.677 253542 DEBUG nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.681 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.681 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 DEBUG nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] No waiting events found dispatching network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:00:05 np0005534516 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 WARNING nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received unexpected event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 for instance with vm_state active and task_state None.
Nov 25 04:00:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.179922483 +0000 UTC m=+0.063188181 container create 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 04:00:06 np0005534516 systemd[1]: Started libpod-conmon-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope.
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.148267721 +0000 UTC m=+0.031533459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.282194226 +0000 UTC m=+0.165459954 container init 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.291398106 +0000 UTC m=+0.174663834 container start 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.296272019 +0000 UTC m=+0.179537737 container attach 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:00:06 np0005534516 dazzling_kalam[388485]: 167 167
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.302004755 +0000 UTC m=+0.185270453 container died 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:00:06 np0005534516 systemd[1]: libpod-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope: Deactivated successfully.
Nov 25 04:00:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3917e53c00a5c9288edc47b4371273c781e1a5178d9f8832d784bf2b9fd0bc55-merged.mount: Deactivated successfully.
Nov 25 04:00:06 np0005534516 podman[388470]: 2025-11-25 09:00:06.34038984 +0000 UTC m=+0.223655538 container remove 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:00:06 np0005534516 systemd[1]: libpod-conmon-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope: Deactivated successfully.
Nov 25 04:00:06 np0005534516 podman[388509]: 2025-11-25 09:00:06.569469796 +0000 UTC m=+0.064266260 container create ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:06 np0005534516 systemd[1]: Started libpod-conmon-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope.
Nov 25 04:00:06 np0005534516 podman[388509]: 2025-11-25 09:00:06.54101176 +0000 UTC m=+0.035808314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:06 np0005534516 podman[388509]: 2025-11-25 09:00:06.69305917 +0000 UTC m=+0.187855674 container init ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:06 np0005534516 podman[388509]: 2025-11-25 09:00:06.700235614 +0000 UTC m=+0.195032118 container start ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:00:06 np0005534516 podman[388509]: 2025-11-25 09:00:06.703827833 +0000 UTC m=+0.198624337 container attach ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:00:06 np0005534516 nova_compute[253538]: 2025-11-25 09:00:06.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:07 np0005534516 gallant_germain[388525]: {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    "0": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "devices": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "/dev/loop3"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            ],
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_name": "ceph_lv0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_size": "21470642176",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "name": "ceph_lv0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "tags": {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_name": "ceph",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.crush_device_class": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.encrypted": "0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_id": "0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.vdo": "0"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            },
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "vg_name": "ceph_vg0"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        }
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    ],
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    "1": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "devices": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "/dev/loop4"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            ],
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_name": "ceph_lv1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_size": "21470642176",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "name": "ceph_lv1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "tags": {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_name": "ceph",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.crush_device_class": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.encrypted": "0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_id": "1",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.vdo": "0"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            },
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "vg_name": "ceph_vg1"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        }
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    ],
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    "2": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "devices": [
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "/dev/loop5"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            ],
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_name": "ceph_lv2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_size": "21470642176",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "name": "ceph_lv2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "tags": {
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.cluster_name": "ceph",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.crush_device_class": "",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.encrypted": "0",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osd_id": "2",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:                "ceph.vdo": "0"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            },
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "type": "block",
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:            "vg_name": "ceph_vg2"
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:        }
Nov 25 04:00:07 np0005534516 gallant_germain[388525]:    ]
Nov 25 04:00:07 np0005534516 gallant_germain[388525]: }
Nov 25 04:00:07 np0005534516 systemd[1]: libpod-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope: Deactivated successfully.
Nov 25 04:00:07 np0005534516 conmon[388525]: conmon ab7ba2a1893dfddaece4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope/container/memory.events
Nov 25 04:00:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:07 np0005534516 podman[388534]: 2025-11-25 09:00:07.652275278 +0000 UTC m=+0.025784653 container died ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:00:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553-merged.mount: Deactivated successfully.
Nov 25 04:00:07 np0005534516 podman[388534]: 2025-11-25 09:00:07.722803367 +0000 UTC m=+0.096312732 container remove ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:00:07 np0005534516 systemd[1]: libpod-conmon-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope: Deactivated successfully.
Nov 25 04:00:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 1.3 MiB/s wr, 65 op/s
Nov 25 04:00:08 np0005534516 nova_compute[253538]: 2025-11-25 09:00:08.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.500463924 +0000 UTC m=+0.038758086 container create 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.481879778 +0000 UTC m=+0.020173970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:08 np0005534516 systemd[1]: Started libpod-conmon-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope.
Nov 25 04:00:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.663263055 +0000 UTC m=+0.201557247 container init 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.67039499 +0000 UTC m=+0.208689182 container start 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.674619994 +0000 UTC m=+0.212914246 container attach 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:00:08 np0005534516 lucid_goldstine[388703]: 167 167
Nov 25 04:00:08 np0005534516 systemd[1]: libpod-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope: Deactivated successfully.
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.67665462 +0000 UTC m=+0.214948812 container died 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 04:00:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-aebb5199e19e87493e23dcb2ae71ca0e60cc5ffaa78e9ed38571dae04b4a3f34-merged.mount: Deactivated successfully.
Nov 25 04:00:08 np0005534516 podman[388687]: 2025-11-25 09:00:08.813231658 +0000 UTC m=+0.351525820 container remove 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:00:08 np0005534516 systemd[1]: libpod-conmon-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope: Deactivated successfully.
Nov 25 04:00:09 np0005534516 podman[388728]: 2025-11-25 09:00:09.003155216 +0000 UTC m=+0.039493675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:00:09 np0005534516 podman[388728]: 2025-11-25 09:00:09.1256238 +0000 UTC m=+0.161962239 container create 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 25 04:00:09 np0005534516 systemd[1]: Started libpod-conmon-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope.
Nov 25 04:00:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:09 np0005534516 podman[388728]: 2025-11-25 09:00:09.254097147 +0000 UTC m=+0.290435576 container init 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:00:09 np0005534516 podman[388728]: 2025-11-25 09:00:09.263836042 +0000 UTC m=+0.300174461 container start 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:00:09 np0005534516 podman[388728]: 2025-11-25 09:00:09.267390599 +0000 UTC m=+0.303729028 container attach 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:00:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 96 op/s
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]: {
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_id": 1,
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "type": "bluestore"
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    },
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_id": 2,
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "type": "bluestore"
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    },
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_id": 0,
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:        "type": "bluestore"
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]:    }
Nov 25 04:00:10 np0005534516 pensive_brahmagupta[388745]: }
Nov 25 04:00:10 np0005534516 systemd[1]: libpod-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Deactivated successfully.
Nov 25 04:00:10 np0005534516 podman[388728]: 2025-11-25 09:00:10.339941162 +0000 UTC m=+1.376279611 container died 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:10 np0005534516 systemd[1]: libpod-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Consumed 1.081s CPU time.
Nov 25 04:00:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2-merged.mount: Deactivated successfully.
Nov 25 04:00:10 np0005534516 podman[388728]: 2025-11-25 09:00:10.397165221 +0000 UTC m=+1.433503620 container remove 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:10 np0005534516 systemd[1]: libpod-conmon-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Deactivated successfully.
Nov 25 04:00:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:00:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:00:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev be8c2088-eff7-4875-a893-0c8320f791ca does not exist
Nov 25 04:00:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9e5a642c-52e7-42a4-87c9-84d432b7c6aa does not exist
Nov 25 04:00:11 np0005534516 NetworkManager[48915]: <info>  [1764061211.1221] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 25 04:00:11 np0005534516 NetworkManager[48915]: <info>  [1764061211.1234] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Nov 25 04:00:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:11Z|01322|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:00:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:11Z|01323|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.572 253542 DEBUG nova.compute.manager [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.572 253542 DEBUG nova.compute.manager [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:00:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:00:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.4 total, 600.0 interval#012Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6047 writes, 23K keys, 6047 commit groups, 1.0 writes per commit group, ingest: 24.53 MB, 0.04 MB/s#012Interval WAL: 6047 writes, 2362 syncs, 2.56 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:00:11 np0005534516 nova_compute[253538]: 2025-11-25 09:00:11.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:00:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:13 np0005534516 nova_compute[253538]: 2025-11-25 09:00:13.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:13 np0005534516 nova_compute[253538]: 2025-11-25 09:00:13.520 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:00:13 np0005534516 nova_compute[253538]: 2025-11-25 09:00:13.520 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:13 np0005534516 nova_compute[253538]: 2025-11-25 09:00:13.554 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:00:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 04:00:16 np0005534516 nova_compute[253538]: 2025-11-25 09:00:16.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:16 np0005534516 nova_compute[253538]: 2025-11-25 09:00:16.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 140 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 794 KiB/s wr, 87 op/s
Nov 25 04:00:18 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:18Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:12:97 10.100.0.4
Nov 25 04:00:18 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:18Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:12:97 10.100.0.4
Nov 25 04:00:18 np0005534516 nova_compute[253538]: 2025-11-25 09:00:18.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:18 np0005534516 podman[388847]: 2025-11-25 09:00:18.820533036 +0000 UTC m=+0.068637110 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 04:00:18 np0005534516 podman[388846]: 2025-11-25 09:00:18.829783968 +0000 UTC m=+0.078006595 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 163 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 04:00:21 np0005534516 nova_compute[253538]: 2025-11-25 09:00:21.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:21 np0005534516 nova_compute[253538]: 2025-11-25 09:00:21.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:00:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4202.4 total, 600.0 interval#012Cumulative writes: 30K writes, 120K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.88 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4235 writes, 17K keys, 4235 commit groups, 1.0 writes per commit group, ingest: 21.75 MB, 0.04 MB/s#012Interval WAL: 4235 writes, 1652 syncs, 2.56 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:00:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:00:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.232 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:81:3e 10.100.0.2 2001:db8::f816:3eff:fee5:813e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee5:813e/64', 'neutron:device_id': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25e4a85d-5a04-4d07-a006-66576a20c294) old=Port_Binding(mac=['fa:16:3e:e5:81:3e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:00:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.233 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25e4a85d-5a04-4d07-a006-66576a20c294 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d updated#033[00m
Nov 25 04:00:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.234 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45ff31ef-9bc5-409e-a7df-ace22e0e812d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:23 np0005534516 nova_compute[253538]: 2025-11-25 09:00:23.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:00:25 np0005534516 podman[388884]: 2025-11-25 09:00:25.837839809 +0000 UTC m=+0.089572449 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:00:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:00:26 np0005534516 nova_compute[253538]: 2025-11-25 09:00:26.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:00:28 np0005534516 nova_compute[253538]: 2025-11-25 09:00:28.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:00:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:00:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:00:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:00:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.798 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.798 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.822 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.928 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.929 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.942 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:00:30 np0005534516 nova_compute[253538]: 2025-11-25 09:00:30.943 253542 INFO nova.compute.claims [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.074 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.396 253542 DEBUG nova.compute.manager [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.397 253542 DEBUG nova.compute.manager [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.398 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.398 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.399 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:00:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:00:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3975728011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.515 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.529 253542 DEBUG nova.compute.provider_tree [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.542 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.542 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.543 253542 INFO nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Terminating instance#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.544 253542 DEBUG nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.545 253542 DEBUG nova.scheduler.client.report [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.566 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.566 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.606 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.606 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.621 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.636 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.719 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.720 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.720 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating image(s)#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.853 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.881 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.963 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:31 np0005534516 nova_compute[253538]: 2025-11-25 09:00:31.967 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:31 np0005534516 kernel: tap6654b89e-a1 (unregistering): left promiscuous mode
Nov 25 04:00:31 np0005534516 NetworkManager[48915]: <info>  [1764061231.9848] device (tap6654b89e-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:31Z|01324|binding|INFO|Releasing lport 6654b89e-a102-49f6-ad76-45e598fe2702 from this chassis (sb_readonly=0)
Nov 25 04:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:31Z|01325|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 down in Southbound
Nov 25 04:00:32 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:32Z|01326|binding|INFO|Removing iface tap6654b89e-a1 ovn-installed in OVS
Nov 25 04:00:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 87 KiB/s wr, 8 op/s
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.007 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:12:97 10.100.0.4'], port_security=['fa:16:3e:d9:12:97 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4356e66d-96cf-4d55-bf3e-280638024374', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f656b36-7475-4baf-b321-d82280dade68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c277aa6-98be-4b49-b9f4-357b27c34694 6ba9af76-9f60-4459-905f-068c6194f108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51801c6e-6c82-485c-b7cb-963a30ef2813, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6654b89e-a102-49f6-ad76-45e598fe2702) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.008 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6654b89e-a102-49f6-ad76-45e598fe2702 in datapath 0f656b36-7475-4baf-b321-d82280dade68 unbound from our chassis#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.009 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f656b36-7475-4baf-b321-d82280dade68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.010 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67687c34-0c2c-45b9-99aa-214c01379bab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.010 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 namespace which is not needed anymore#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.023 253542 DEBUG nova.policy [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.034 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:00:32 np0005534516 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 25 04:00:32 np0005534516 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000080.scope: Consumed 14.414s CPU time.
Nov 25 04:00:32 np0005534516 systemd-machined[215790]: Machine qemu-158-instance-00000080 terminated.
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.070 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.094 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.098 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f05e074d-5838-4c4b-89dc-76afe386f635_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : haproxy version is 2.8.14-c23fe91
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : path to executable is /usr/sbin/haproxy
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : Exiting Master process...
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : Exiting Master process...
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [ALERT]    (388241) : Current worker (388245) exited with code 143 (Terminated)
Nov 25 04:00:32 np0005534516 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : All workers exited. Exiting... (0)
Nov 25 04:00:32 np0005534516 systemd[1]: libpod-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope: Deactivated successfully.
Nov 25 04:00:32 np0005534516 podman[389033]: 2025-11-25 09:00:32.160699869 +0000 UTC m=+0.054043313 container died af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.194 253542 INFO nova.virt.libvirt.driver [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance destroyed successfully.#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.198 253542 DEBUG nova.objects.instance [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:00:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b-userdata-shm.mount: Deactivated successfully.
Nov 25 04:00:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4d8292ddc888ebc952e3026b61ff6dd04593c40f1d79add281577b607917257e-merged.mount: Deactivated successfully.
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.209 253542 DEBUG nova.virt.libvirt.vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:00:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:00:03Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.209 253542 DEBUG nova.network.os_vif_util [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.210 253542 DEBUG nova.network.os_vif_util [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.210 253542 DEBUG os_vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.212 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6654b89e-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.219 253542 INFO os_vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1')#033[00m
Nov 25 04:00:32 np0005534516 podman[389033]: 2025-11-25 09:00:32.275847683 +0000 UTC m=+0.169191107 container cleanup af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:00:32 np0005534516 systemd[1]: libpod-conmon-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope: Deactivated successfully.
Nov 25 04:00:32 np0005534516 podman[389108]: 2025-11-25 09:00:32.375752083 +0000 UTC m=+0.078568241 container remove af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.383 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[26035961-2f11-420e-83da-77cef2105198]: (4, ('Tue Nov 25 09:00:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 (af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b)\naf8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b\nTue Nov 25 09:00:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 (af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b)\naf8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.385 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70eaf3bd-30af-4aa0-9cbf-e57abf32f529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.386 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f656b36-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.387 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 kernel: tap0f656b36-70: left promiscuous mode
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ad7b1-029c-4801-b7a8-ab5aea80e159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.418 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8828d72f-acf5-44dd-90bf-d5fbddb72e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.419 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9df5f60-ca79-4ad8-8164-489e64f01b68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.435 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66d8028a-de49-482d-8e41-40477c5d139e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655770, 'reachable_time': 31378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389125, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 systemd[1]: run-netns-ovnmeta\x2d0f656b36\x2d7475\x2d4baf\x2db321\x2dd82280dade68.mount: Deactivated successfully.
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.439 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:00:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.439 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8928675c-c0d7-41ec-b188-5dce1d66df15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.614 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f05e074d-5838-4c4b-89dc-76afe386f635_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.676 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.705 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Successfully created port: 30ba0f84-3dca-47f6-911d-5fff56a99b0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.721 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.721 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.743 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.780 253542 DEBUG nova.objects.instance [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Ensure instance console log exists: /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.793 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.795 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.795 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.962 253542 INFO nova.virt.libvirt.driver [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deleting instance files /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374_del#033[00m
Nov 25 04:00:32 np0005534516 nova_compute[253538]: 2025-11-25 09:00:32.963 253542 INFO nova.virt.libvirt.driver [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deletion of /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374_del complete#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.011 253542 INFO nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.011 253542 DEBUG oslo.service.loopingcall [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.012 253542 DEBUG nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.012 253542 DEBUG nova.network.neutron [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.640 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Successfully updated port: 30ba0f84-3dca-47f6-911d-5fff56a99b0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.788 253542 DEBUG nova.compute.manager [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.789 253542 DEBUG nova.compute.manager [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.789 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.790 253542 DEBUG nova.network.neutron [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.804 253542 INFO nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 0.79 seconds to deallocate network for instance.#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.850 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.855 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.856 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:33 np0005534516 nova_compute[253538]: 2025-11-25 09:00:33.927 253542 DEBUG oslo_concurrency.processutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 147 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 360 KiB/s wr, 14 op/s
Nov 25 04:00:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:00:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437537554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.445 253542 DEBUG oslo_concurrency.processutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.456 253542 DEBUG nova.compute.provider_tree [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.481 253542 DEBUG nova.scheduler.client.report [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.513 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.557 253542 INFO nova.scheduler.client.report [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 4356e66d-96cf-4d55-bf3e-280638024374#033[00m
Nov 25 04:00:34 np0005534516 nova_compute[253538]: 2025-11-25 09:00:34.625 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:35 np0005534516 nova_compute[253538]: 2025-11-25 09:00:35.920 253542 DEBUG nova.compute.manager [req-f1903a81-7b57-4d4f-b642-a2011ddba0e3 req-d8927bb2-8f5f-452d-bbe0-5ffa5bf14318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-deleted-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:00:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 151 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.227 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.255 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.256 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance network_info: |[{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.256 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.257 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.263 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start _get_guest_xml network_info=[{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.268 253542 WARNING nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.274 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.274 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.278 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.278 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.285 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:00:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639692007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.751 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.784 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:36 np0005534516 nova_compute[253538]: 2025-11-25 09:00:36.789 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:00:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883182616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.254 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.256 253542 DEBUG nova.virt.libvirt.vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:31Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.257 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.258 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.260 253542 DEBUG nova.objects.instance [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.275 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <uuid>f05e074d-5838-4c4b-89dc-76afe386f635</uuid>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <name>instance-00000081</name>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1223430136</nova:name>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:00:36</nova:creationTime>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <nova:port uuid="30ba0f84-3dca-47f6-911d-5fff56a99b0b">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed4:d053" ipVersion="6"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="serial">f05e074d-5838-4c4b-89dc-76afe386f635</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="uuid">f05e074d-5838-4c4b-89dc-76afe386f635</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f05e074d-5838-4c4b-89dc-76afe386f635_disk">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f05e074d-5838-4c4b-89dc-76afe386f635_disk.config">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d4:d0:53"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <target dev="tap30ba0f84-3d"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/console.log" append="off"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:00:37 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:00:37 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:00:37 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:00:37 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.277 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Preparing to wait for external event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.277 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.278 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.279 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.280 253542 DEBUG nova.virt.libvirt.vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:31Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.281 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.282 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.284 253542 DEBUG os_vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ba0f84-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30ba0f84-3d, col_values=(('external_ids', {'iface-id': '30ba0f84-3dca-47f6-911d-5fff56a99b0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:d0:53', 'vm-uuid': 'f05e074d-5838-4c4b-89dc-76afe386f635'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:37 np0005534516 NetworkManager[48915]: <info>  [1764061237.2955] manager: (tap30ba0f84-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.301 253542 INFO os_vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d')#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.396 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.400 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.400 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:d4:d0:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.401 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Using config drive#033[00m
Nov 25 04:00:37 np0005534516 nova_compute[253538]: 2025-11-25 09:00:37.428 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 04:00:38 np0005534516 nova_compute[253538]: 2025-11-25 09:00:38.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.157 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating config drive at /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.170 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pz4ih0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.321 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pz4ih0t" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.364 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.369 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config f05e074d-5838-4c4b-89dc-76afe386f635_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.582 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config f05e074d-5838-4c4b-89dc-76afe386f635_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.584 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deleting local config drive /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config because it was imported into RBD.#033[00m
Nov 25 04:00:39 np0005534516 kernel: tap30ba0f84-3d: entered promiscuous mode
Nov 25 04:00:39 np0005534516 NetworkManager[48915]: <info>  [1764061239.6590] manager: (tap30ba0f84-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:39Z|01327|binding|INFO|Claiming lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b for this chassis.
Nov 25 04:00:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:39Z|01328|binding|INFO|30ba0f84-3dca-47f6-911d-5fff56a99b0b: Claiming fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.670 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], port_security=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fed4:d053/64', 'neutron:device_id': 'f05e074d-5838-4c4b-89dc-76afe386f635', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=30ba0f84-3dca-47f6-911d-5fff56a99b0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.673 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 30ba0f84-3dca-47f6-911d-5fff56a99b0b in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d bound to our chassis#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.674 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d#033[00m
Nov 25 04:00:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:39Z|01329|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b ovn-installed in OVS
Nov 25 04:00:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:39Z|01330|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b up in Southbound
Nov 25 04:00:39 np0005534516 nova_compute[253538]: 2025-11-25 09:00:39.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.684 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c63a5440-7164-4d51-8fca-d04df14fca04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.685 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c2834b5-01 in ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.687 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c2834b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a356c37f-2ea3-4b88-bb2a-76002022c210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bf092b-f60a-4ed7-8907-2de011fe3b29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 systemd-udevd[389358]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:39 np0005534516 systemd-machined[215790]: New machine qemu-159-instance-00000081.
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.702 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0cffca-61e6-49a5-b0ac-67032705e441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 NetworkManager[48915]: <info>  [1764061239.7098] device (tap30ba0f84-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:00:39 np0005534516 NetworkManager[48915]: <info>  [1764061239.7106] device (tap30ba0f84-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:00:39 np0005534516 systemd[1]: Started Virtual Machine qemu-159-instance-00000081.
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7389d1aa-a342-41df-bc35-9ab777d09082]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.756 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[20ac8fe7-2c0e-49ac-8612-5194c25851d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 systemd-udevd[389362]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.762 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[569ae129-4cc9-4d8a-aa8b-dcbf8c7d8481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 NetworkManager[48915]: <info>  [1764061239.7627] manager: (tap6c2834b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/546)
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.799 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cceb5397-fb66-4261-86ae-e41a8dabfdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[78bf2fe2-74e2-4dc1-9e21-e5132681f8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 NetworkManager[48915]: <info>  [1764061239.8380] device (tap6c2834b5-00): carrier: link connected
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a0912dd6-01bd-4767-9c61-83291183b3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.861 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c301226f-713a-4166-864d-456adcdfbedb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389390, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5e13f0-6742-42c7-ba1d-99f4e25f2f0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:813e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659443, 'tstamp': 659443}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389391, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.886 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffd531f-2690-43d9-a561-a53956d147eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389392, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.920 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b62a253-5bdc-4669-8e14-8e52d7422e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[330164d7-7d7b-4fde-9cab-61622cc51388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.999 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.000 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.001 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:40 np0005534516 kernel: tap6c2834b5-00: entered promiscuous mode
Nov 25 04:00:40 np0005534516 NetworkManager[48915]: <info>  [1764061240.0528] manager: (tap6c2834b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:00:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:40Z|01331|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.075 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a710e-551c-4c6d-9d93-c682b6800a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.076 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 04:00:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.077 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'env', 'PROCESS_TAG=haproxy-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.101 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.407 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.4065454, f05e074d-5838-4c4b-89dc-76afe386f635 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.408 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Started (Lifecycle Event)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.428 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.432 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.4075935, f05e074d-5838-4c4b-89dc-76afe386f635 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Paused (Lifecycle Event)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.448 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:00:40 np0005534516 podman[389466]: 2025-11-25 09:00:40.45799714 +0000 UTC m=+0.052140331 container create d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.464 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:00:40 np0005534516 systemd[1]: Started libpod-conmon-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope.
Nov 25 04:00:40 np0005534516 podman[389466]: 2025-11-25 09:00:40.426882963 +0000 UTC m=+0.021026114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:00:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:00:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f8cf4280ba7f34d6348b543dd0da02fb745e7d55375aa17f4fa4a2f697425a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:40 np0005534516 podman[389466]: 2025-11-25 09:00:40.553272623 +0000 UTC m=+0.147415784 container init d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:00:40 np0005534516 podman[389466]: 2025-11-25 09:00:40.564034885 +0000 UTC m=+0.158178026 container start d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 04:00:40 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : New worker (389487) forked
Nov 25 04:00:40 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : Loading success.
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.732 253542 DEBUG nova.compute.manager [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.733 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG nova.compute.manager [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Processing event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.736 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.741 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.7408547, f05e074d-5838-4c4b-89dc-76afe386f635 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.741 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Resumed (Lifecycle Event)
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.744 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.749 253542 INFO nova.virt.libvirt.driver [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance spawned successfully.
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.750 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.767 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.772 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.787 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.788 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.789 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.790 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.791 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.799 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.844 253542 INFO nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 9.12 seconds to spawn the instance on the hypervisor.
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.844 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.912 253542 INFO nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 10.04 seconds to build instance.
Nov 25 04:00:40 np0005534516 nova_compute[253538]: 2025-11-25 09:00:40.930 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:41 np0005534516 nova_compute[253538]: 2025-11-25 09:00:41.975 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 04:00:41 np0005534516 nova_compute[253538]: 2025-11-25 09:00:41.975 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:00:41 np0005534516 nova_compute[253538]: 2025-11-25 09:00:41.994 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:00:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 04:00:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:42Z|01332|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.838 253542 DEBUG nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.840 253542 DEBUG nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.840 253542 WARNING nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received unexpected event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with vm_state active and task_state None.
Nov 25 04:00:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:42Z|01333|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 04:00:42 np0005534516 nova_compute[253538]: 2025-11-25 09:00:42.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:43 np0005534516 nova_compute[253538]: 2025-11-25 09:00:43.578 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:44 np0005534516 NetworkManager[48915]: <info>  [1764061244.2875] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Nov 25 04:00:44 np0005534516 NetworkManager[48915]: <info>  [1764061244.2887] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Nov 25 04:00:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:44Z|01334|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:44Z|01335|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.327 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.937 253542 DEBUG nova.compute.manager [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG nova.compute.manager [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:00:44 np0005534516 nova_compute[253538]: 2025-11-25 09:00:44.940 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 04:00:45 np0005534516 nova_compute[253538]: 2025-11-25 09:00:45.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:00:45 np0005534516 nova_compute[253538]: 2025-11-25 09:00:45.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 04:00:45 np0005534516 nova_compute[253538]: 2025-11-25 09:00:45.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 04:00:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Nov 25 04:00:46 np0005534516 nova_compute[253538]: 2025-11-25 09:00:46.145 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:00:46 np0005534516 nova_compute[253538]: 2025-11-25 09:00:46.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.191 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061232.1897573, 4356e66d-96cf-4d55-bf3e-280638024374 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.192 253542 INFO nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.215 253542 DEBUG nova.compute.manager [None req-df57c3cf-a639-415a-9a8c-f297cf07c0e3 - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.414 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.415 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.440 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.442 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.442 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:00:47 np0005534516 nova_compute[253538]: 2025-11-25 09:00:47.443 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:00:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 87 op/s
Nov 25 04:00:48 np0005534516 nova_compute[253538]: 2025-11-25 09:00:48.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:49 np0005534516 podman[389499]: 2025-11-25 09:00:49.83501062 +0000 UTC m=+0.075490336 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:00:49 np0005534516 podman[389498]: 2025-11-25 09:00:49.850186202 +0000 UTC m=+0.096065275 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.858 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:49 np0005534516 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:00:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:00:50 np0005534516 nova_compute[253538]: 2025-11-25 09:00:50.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:50 np0005534516 nova_compute[253538]: 2025-11-25 09:00:50.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:51 np0005534516 nova_compute[253538]: 2025-11-25 09:00:51.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:00:52 np0005534516 nova_compute[253538]: 2025-11-25 09:00:52.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:00:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.082 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:00:53
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.387 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3583MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:00:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.482 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f05e074d-5838-4c4b-89dc-76afe386f635 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.483 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.484 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:00:53 np0005534516 nova_compute[253538]: 2025-11-25 09:00:53.725 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 138 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 329 KiB/s wr, 62 op/s
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:00:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:00:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:00:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204900587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:00:54 np0005534516 nova_compute[253538]: 2025-11-25 09:00:54.267 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:00:54 np0005534516 nova_compute[253538]: 2025-11-25 09:00:54.273 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:00:54 np0005534516 nova_compute[253538]: 2025-11-25 09:00:54.287 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:00:54 np0005534516 nova_compute[253538]: 2025-11-25 09:00:54.358 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:00:54 np0005534516 nova_compute[253538]: 2025-11-25 09:00:54.359 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:55 np0005534516 nova_compute[253538]: 2025-11-25 09:00:55.353 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:00:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:55Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:d0:53 10.100.0.14
Nov 25 04:00:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:00:55Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:d0:53 10.100.0.14
Nov 25 04:00:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 148 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Nov 25 04:00:56 np0005534516 podman[389584]: 2025-11-25 09:00:56.854351196 +0000 UTC m=+0.105712478 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.429 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.430 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.475 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.616 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.617 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.623 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.624 253542 INFO nova.compute.claims [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Claim successful on node compute-0.ctlplane.example.com
Nov 25 04:00:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:00:57 np0005534516 nova_compute[253538]: 2025-11-25 09:00:57.778 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:00:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 161 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 04:00:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:00:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/816391199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.245 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.250 253542 DEBUG nova.compute.provider_tree [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.265 253542 DEBUG nova.scheduler.client.report [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.295 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.296 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.431 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.431 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.462 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.486 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.607 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.609 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.609 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating image(s)
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.636 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.660 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.687 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.692 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.746 253542 DEBUG nova.policy [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.788 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.789 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.790 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.790 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.811 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:00:58 np0005534516 nova_compute[253538]: 2025-11-25 09:00:58.814 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 830678ef-9f48-4175-aa6d-666c24a11689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.256 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 830678ef-9f48-4175-aa6d-666c24a11689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.335 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.475 253542 DEBUG nova.objects.instance [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.494 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.495 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Ensure instance console log exists: /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.495 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.496 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.496 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:00:59 np0005534516 nova_compute[253538]: 2025-11-25 09:00:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 04:01:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 187 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 3.2 MiB/s wr, 62 op/s
Nov 25 04:01:00 np0005534516 nova_compute[253538]: 2025-11-25 09:01:00.480 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Successfully created port: 2719889c-c962-425f-9df3-6f3d741ca0ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 04:01:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 206 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 73 op/s
Nov 25 04:01:02 np0005534516 nova_compute[253538]: 2025-11-25 09:01:02.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:01:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:03 np0005534516 nova_compute[253538]: 2025-11-25 09:01:03.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 213 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001102599282900306 of space, bias 1.0, pg target 0.3307797848700918 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:01:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.137 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Successfully updated port: 2719889c-c962-425f-9df3-6f3d741ca0ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.282 253542 DEBUG nova.compute.manager [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG nova.compute.manager [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:01:05 np0005534516 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 04:01:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Nov 25 04:01:06 np0005534516 nova_compute[253538]: 2025-11-25 09:01:06.092 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:01:06 np0005534516 nova_compute[253538]: 2025-11-25 09:01:06.141 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.135 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.153 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.154 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.154 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:01:07 np0005534516 nova_compute[253538]: 2025-11-25 09:01:07.377 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 04:01:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.5 MiB/s wr, 67 op/s
Nov 25 04:01:08 np0005534516 nova_compute[253538]: 2025-11-25 09:01:08.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.540 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.993 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.994 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance network_info: |[{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:01:09 np0005534516 nova_compute[253538]: 2025-11-25 09:01:09.997 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start _get_guest_xml network_info=[{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.003 253542 WARNING nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.014 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.015 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:01:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.019 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.020 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.021 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.021 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.022 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.022 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.023 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.023 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.024 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.024 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.025 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.025 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.026 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.026 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.030 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750179277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.499 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.522 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.527 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750775944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.949 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.951 253542 DEBUG nova.virt.libvirt.vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:58Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.952 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.953 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.955 253542 DEBUG nova.objects.instance [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.978 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <uuid>830678ef-9f48-4175-aa6d-666c24a11689</uuid>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <name>instance-00000082</name>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983</nova:name>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:01:10</nova:creationTime>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <nova:port uuid="2719889c-c962-425f-9df3-6f3d741ca0ec">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="serial">830678ef-9f48-4175-aa6d-666c24a11689</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="uuid">830678ef-9f48-4175-aa6d-666c24a11689</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/830678ef-9f48-4175-aa6d-666c24a11689_disk">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/830678ef-9f48-4175-aa6d-666c24a11689_disk.config">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:0c:1f:9b"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <target dev="tap2719889c-c9"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/console.log" append="off"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:01:10 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:01:10 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:01:10 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:01:10 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.980 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Preparing to wait for external event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.981 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.981 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.982 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.982 253542 DEBUG nova.virt.libvirt.vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:58Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.983 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.984 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.984 253542 DEBUG os_vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2719889c-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2719889c-c9, col_values=(('external_ids', {'iface-id': '2719889c-c962-425f-9df3-6f3d741ca0ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:1f:9b', 'vm-uuid': '830678ef-9f48-4175-aa6d-666c24a11689'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:10 np0005534516 NetworkManager[48915]: <info>  [1764061270.9944] manager: (tap2719889c-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Nov 25 04:01:10 np0005534516 nova_compute[253538]: 2025-11-25 09:01:10.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.004 253542 INFO os_vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9')#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.053 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:0c:1f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Using config drive#033[00m
Nov 25 04:01:11 np0005534516 nova_compute[253538]: 2025-11-25 09:01:11.088 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 752e73c7-75ac-4d34-aa35-8b8a9f409851 does not exist
Nov 25 04:01:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4475ab59-34bf-44ef-ab0e-3a0961802d48 does not exist
Nov 25 04:01:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 10df075c-21e3-4ecb-9769-789baa4c9bac does not exist
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:01:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:01:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 696 KiB/s wr, 25 op/s
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.077 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating config drive at /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.083 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_u_h_8rh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:01:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.227 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_u_h_8rh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.254 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.257 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config 830678ef-9f48-4175-aa6d-666c24a11689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.29835137 +0000 UTC m=+0.054698311 container create 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:01:12 np0005534516 systemd[1]: Started libpod-conmon-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope.
Nov 25 04:01:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.273929015 +0000 UTC m=+0.030275996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.383526997 +0000 UTC m=+0.139873978 container init 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.392201614 +0000 UTC m=+0.148548565 container start 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.397348294 +0000 UTC m=+0.153695245 container attach 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:01:12 np0005534516 practical_shamir[390207]: 167 167
Nov 25 04:01:12 np0005534516 systemd[1]: libpod-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope: Deactivated successfully.
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.399457031 +0000 UTC m=+0.155803972 container died 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:01:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4c9d529062dfe4c437cfe613279421b23a2f5c4855d082a656c03eb3d32805c9-merged.mount: Deactivated successfully.
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.437 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config 830678ef-9f48-4175-aa6d-666c24a11689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.440 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deleting local config drive /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config because it was imported into RBD.#033[00m
Nov 25 04:01:12 np0005534516 podman[390171]: 2025-11-25 09:01:12.451204399 +0000 UTC m=+0.207551330 container remove 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:01:12 np0005534516 systemd[1]: libpod-conmon-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope: Deactivated successfully.
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.4964] manager: (tap2719889c-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Nov 25 04:01:12 np0005534516 kernel: tap2719889c-c9: entered promiscuous mode
Nov 25 04:01:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:12Z|01336|binding|INFO|Claiming lport 2719889c-c962-425f-9df3-6f3d741ca0ec for this chassis.
Nov 25 04:01:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:12Z|01337|binding|INFO|2719889c-c962-425f-9df3-6f3d741ca0ec: Claiming fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:12Z|01338|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec ovn-installed in OVS
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 systemd-udevd[390253]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:01:12 np0005534516 systemd-machined[215790]: New machine qemu-160-instance-00000082.
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.5498] device (tap2719889c-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.5511] device (tap2719889c-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:01:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:12Z|01339|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec up in Southbound
Nov 25 04:01:12 np0005534516 systemd[1]: Started Virtual Machine qemu-160-instance-00000082.
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:1f:9b 10.100.0.7'], port_security=['fa:16:3e:0c:1f:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '830678ef-9f48-4175-aa6d-666c24a11689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e 8a14d0f4-bb68-44c1-9d93-80bac0a038b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2719889c-c962-425f-9df3-6f3d741ca0ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.559 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2719889c-c962-425f-9df3-6f3d741ca0ec in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 bound to our chassis#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e170b9d2-7afe-4330-a7ff-8b6f89666aea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.573 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d5ee0a-51 in ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.575 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d5ee0a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94a654c1-c31c-45fa-826a-ad0d9a2ef717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c6f823-8919-4ba9-9cd1-e9181c2e2ae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.588 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e24770ca-10ad-4d4b-b0ea-38060480baa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27060116-69ae-4dfe-8911-b6846d972fe3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 podman[390265]: 2025-11-25 09:01:12.630622013 +0000 UTC m=+0.034657634 container create 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5f75ca-2986-40b2-a244-231a92252ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.6434] manager: (tap01d5ee0a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/552)
Nov 25 04:01:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7a216def-e32d-4497-b2a4-84b928c16a50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 systemd[1]: Started libpod-conmon-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope.
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.677 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[afd08671-2fdd-4cc1-904a-6f3d9895cfa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.680 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61f12dac-5eb0-4945-bd69-0f0f02e816bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:12 np0005534516 podman[390265]: 2025-11-25 09:01:12.617183867 +0000 UTC m=+0.021219508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.718 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[505a24fd-f858-4fa0-9aca-fbceb7b5e3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.7215] device (tap01d5ee0a-50): carrier: link connected
Nov 25 04:01:12 np0005534516 podman[390265]: 2025-11-25 09:01:12.728671222 +0000 UTC m=+0.132706853 container init 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:12 np0005534516 podman[390265]: 2025-11-25 09:01:12.740924726 +0000 UTC m=+0.144960357 container start 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:12 np0005534516 podman[390265]: 2025-11-25 09:01:12.744476972 +0000 UTC m=+0.148512643 container attach 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.746 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5eeaaae6-b505-49a9-a2f4-6c0c674e8050]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390312, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.762 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7210de0a-6f5a-435a-a9b3-3d17b5e3b77d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:394a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662730, 'tstamp': 662730}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390314, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.778 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73d2a943-0ba9-4ecf-9424-8fec1b6e0aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390315, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56770ab1-804b-4450-82e2-091a4d586141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.875 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19baacbd-3571-45cf-9839-ad79c06c846d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.878 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.879 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.880 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 NetworkManager[48915]: <info>  [1764061272.8830] manager: (tap01d5ee0a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Nov 25 04:01:12 np0005534516 kernel: tap01d5ee0a-50: entered promiscuous mode
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.886 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:12Z|01340|binding|INFO|Releasing lport e613213f-7deb-43ce-acbb-25b798b2b340 from this chassis (sb_readonly=0)
Nov 25 04:01:12 np0005534516 nova_compute[253538]: 2025-11-25 09:01:12.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.950 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e700d4eb-36cc-4ec5-8e72-d5be9ced80ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.952 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:01:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.953 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'env', 'PROCESS_TAG=haproxy-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d5ee0a-5a87-445b-8539-b33b1f9d0842.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.275 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.274793, 830678ef-9f48-4175-aa6d-666c24a11689 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Started (Lifecycle Event)#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.300 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.305 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.2750247, 830678ef-9f48-4175-aa6d-666c24a11689 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.306 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.321 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.325 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.338 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:13 np0005534516 podman[390387]: 2025-11-25 09:01:13.358382632 +0000 UTC m=+0.048837790 container create f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 04:01:13 np0005534516 systemd[1]: Started libpod-conmon-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope.
Nov 25 04:01:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a4031ab9bbc2e36451a0ff49fcc18576186761ed3f26382ce7cd4974678668/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:13 np0005534516 podman[390387]: 2025-11-25 09:01:13.332028675 +0000 UTC m=+0.022483853 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:01:13 np0005534516 podman[390387]: 2025-11-25 09:01:13.433741413 +0000 UTC m=+0.124196611 container init f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:01:13 np0005534516 podman[390387]: 2025-11-25 09:01:13.446027477 +0000 UTC m=+0.136482665 container start f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 04:01:13 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : New worker (390411) forked
Nov 25 04:01:13 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : Loading success.
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.706 253542 DEBUG nova.compute.manager [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.707 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.708 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.709 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.709 253542 DEBUG nova.compute.manager [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Processing event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.710 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.7159095, 830678ef-9f48-4175-aa6d-666c24a11689 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.716 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.720 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.726 253542 INFO nova.virt.libvirt.driver [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance spawned successfully.#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.727 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.736 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.748 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.755 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.755 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.756 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.757 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.758 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.759 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.767 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:13 np0005534516 distracted_snyder[390299]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:01:13 np0005534516 distracted_snyder[390299]: --> relative data size: 1.0
Nov 25 04:01:13 np0005534516 distracted_snyder[390299]: --> All data devices are unavailable
Nov 25 04:01:13 np0005534516 systemd[1]: libpod-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Deactivated successfully.
Nov 25 04:01:13 np0005534516 systemd[1]: libpod-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Consumed 1.121s CPU time.
Nov 25 04:01:13 np0005534516 podman[390265]: 2025-11-25 09:01:13.942543181 +0000 UTC m=+1.346578802 container died 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:01:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb-merged.mount: Deactivated successfully.
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.989 253542 INFO nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 15.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:01:13 np0005534516 nova_compute[253538]: 2025-11-25 09:01:13.990 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:14 np0005534516 podman[390265]: 2025-11-25 09:01:14.011974261 +0000 UTC m=+1.416009892 container remove 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:01:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 66 KiB/s wr, 17 op/s
Nov 25 04:01:14 np0005534516 systemd[1]: libpod-conmon-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Deactivated successfully.
Nov 25 04:01:14 np0005534516 nova_compute[253538]: 2025-11-25 09:01:14.203 253542 INFO nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 16.61 seconds to build instance.#033[00m
Nov 25 04:01:14 np0005534516 nova_compute[253538]: 2025-11-25 09:01:14.285 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:14.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:14 np0005534516 nova_compute[253538]: 2025-11-25 09:01:14.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:14.561 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:01:14 np0005534516 podman[390589]: 2025-11-25 09:01:14.916604444 +0000 UTC m=+0.063410536 container create f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:01:14 np0005534516 systemd[1]: Started libpod-conmon-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope.
Nov 25 04:01:14 np0005534516 podman[390589]: 2025-11-25 09:01:14.890578356 +0000 UTC m=+0.037384488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:15 np0005534516 podman[390589]: 2025-11-25 09:01:15.019801733 +0000 UTC m=+0.166607845 container init f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:01:15 np0005534516 podman[390589]: 2025-11-25 09:01:15.030922266 +0000 UTC m=+0.177728358 container start f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:01:15 np0005534516 podman[390589]: 2025-11-25 09:01:15.03403525 +0000 UTC m=+0.180841342 container attach f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:15 np0005534516 festive_poitras[390605]: 167 167
Nov 25 04:01:15 np0005534516 systemd[1]: libpod-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope: Deactivated successfully.
Nov 25 04:01:15 np0005534516 podman[390589]: 2025-11-25 09:01:15.038067601 +0000 UTC m=+0.184873703 container died f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 04:01:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ea862fb840c21e1ac31bfc3ac7b168aba1e543b0b68351e4da8ce0b85a33d6b5-merged.mount: Deactivated successfully.
Nov 25 04:01:15 np0005534516 podman[390589]: 2025-11-25 09:01:15.07997052 +0000 UTC m=+0.226776622 container remove f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:01:15 np0005534516 systemd[1]: libpod-conmon-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope: Deactivated successfully.
Nov 25 04:01:15 np0005534516 podman[390627]: 2025-11-25 09:01:15.28786696 +0000 UTC m=+0.056669914 container create 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:01:15 np0005534516 podman[390627]: 2025-11-25 09:01:15.255841108 +0000 UTC m=+0.024644122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:15 np0005534516 systemd[1]: Started libpod-conmon-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope.
Nov 25 04:01:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:15 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:15 np0005534516 podman[390627]: 2025-11-25 09:01:15.423853531 +0000 UTC m=+0.192656505 container init 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:01:15 np0005534516 podman[390627]: 2025-11-25 09:01:15.437209934 +0000 UTC m=+0.206012878 container start 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:01:15 np0005534516 podman[390627]: 2025-11-25 09:01:15.440910555 +0000 UTC m=+0.209713529 container attach 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.811 253542 DEBUG nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.815 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.816 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.816 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.817 253542 DEBUG nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.818 253542 WARNING nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received unexpected event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with vm_state active and task_state None.#033[00m
Nov 25 04:01:15 np0005534516 nova_compute[253538]: 2025-11-25 09:01:15.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]: {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    "0": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "devices": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "/dev/loop3"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            ],
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_name": "ceph_lv0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_size": "21470642176",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "name": "ceph_lv0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "tags": {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_name": "ceph",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.crush_device_class": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.encrypted": "0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_id": "0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.vdo": "0"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            },
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "vg_name": "ceph_vg0"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        }
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    ],
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    "1": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "devices": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "/dev/loop4"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            ],
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_name": "ceph_lv1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_size": "21470642176",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "name": "ceph_lv1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "tags": {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_name": "ceph",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.crush_device_class": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.encrypted": "0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_id": "1",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.vdo": "0"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            },
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "vg_name": "ceph_vg1"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        }
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    ],
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    "2": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "devices": [
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "/dev/loop5"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            ],
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_name": "ceph_lv2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_size": "21470642176",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "name": "ceph_lv2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "tags": {
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.cluster_name": "ceph",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.crush_device_class": "",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.encrypted": "0",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osd_id": "2",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:                "ceph.vdo": "0"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            },
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "type": "block",
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:            "vg_name": "ceph_vg2"
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:        }
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]:    ]
Nov 25 04:01:16 np0005534516 exciting_bassi[390643]: }
Nov 25 04:01:16 np0005534516 systemd[1]: libpod-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope: Deactivated successfully.
Nov 25 04:01:16 np0005534516 podman[390652]: 2025-11-25 09:01:16.237616481 +0000 UTC m=+0.022952056 container died 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:01:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9-merged.mount: Deactivated successfully.
Nov 25 04:01:16 np0005534516 podman[390652]: 2025-11-25 09:01:16.284199528 +0000 UTC m=+0.069535043 container remove 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:01:16 np0005534516 systemd[1]: libpod-conmon-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope: Deactivated successfully.
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.875785761 +0000 UTC m=+0.039061744 container create fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 04:01:16 np0005534516 systemd[1]: Started libpod-conmon-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope.
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.85558003 +0000 UTC m=+0.018855993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.974151018 +0000 UTC m=+0.137427081 container init fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.981531518 +0000 UTC m=+0.144807471 container start fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.985104446 +0000 UTC m=+0.148380429 container attach fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:16 np0005534516 pensive_archimedes[390825]: 167 167
Nov 25 04:01:16 np0005534516 systemd[1]: libpod-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope: Deactivated successfully.
Nov 25 04:01:16 np0005534516 podman[390809]: 2025-11-25 09:01:16.987847171 +0000 UTC m=+0.151123114 container died fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 04:01:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7a97dce1d426b3e3a51143c3f7d37bed786b5d17933ca44a673aeeaed327b296-merged.mount: Deactivated successfully.
Nov 25 04:01:17 np0005534516 podman[390809]: 2025-11-25 09:01:17.031970481 +0000 UTC m=+0.195246464 container remove fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 04:01:17 np0005534516 systemd[1]: libpod-conmon-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope: Deactivated successfully.
Nov 25 04:01:17 np0005534516 podman[390850]: 2025-11-25 09:01:17.262338072 +0000 UTC m=+0.059627094 container create f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:01:17 np0005534516 systemd[1]: Started libpod-conmon-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope.
Nov 25 04:01:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:01:17 np0005534516 podman[390850]: 2025-11-25 09:01:17.245035551 +0000 UTC m=+0.042324673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:01:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:01:17 np0005534516 podman[390850]: 2025-11-25 09:01:17.358849699 +0000 UTC m=+0.156138811 container init f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:01:17 np0005534516 podman[390850]: 2025-11-25 09:01:17.370509496 +0000 UTC m=+0.167798518 container start f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:01:17 np0005534516 podman[390850]: 2025-11-25 09:01:17.374340911 +0000 UTC m=+0.171629993 container attach f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 04:01:17 np0005534516 nova_compute[253538]: 2025-11-25 09:01:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 12 KiB/s wr, 38 op/s
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]: {
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_id": 1,
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "type": "bluestore"
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    },
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_id": 2,
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "type": "bluestore"
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    },
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_id": 0,
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:        "type": "bluestore"
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]:    }
Nov 25 04:01:18 np0005534516 jovial_joliot[390866]: }
Nov 25 04:01:18 np0005534516 systemd[1]: libpod-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Deactivated successfully.
Nov 25 04:01:18 np0005534516 systemd[1]: libpod-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Consumed 1.066s CPU time.
Nov 25 04:01:18 np0005534516 podman[390850]: 2025-11-25 09:01:18.431037792 +0000 UTC m=+1.228326814 container died f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:01:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508-merged.mount: Deactivated successfully.
Nov 25 04:01:18 np0005534516 podman[390850]: 2025-11-25 09:01:18.512230892 +0000 UTC m=+1.309519914 container remove f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:01:18 np0005534516 systemd[1]: libpod-conmon-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Deactivated successfully.
Nov 25 04:01:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:01:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:01:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d27debef-311f-4557-b4e3-14ee9032084d does not exist
Nov 25 04:01:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 192de5e3-8ee2-40cf-895f-a666b0ec470e does not exist
Nov 25 04:01:18 np0005534516 nova_compute[253538]: 2025-11-25 09:01:18.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:01:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.450 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.451 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.471 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.551 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.552 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.562 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.563 253542 INFO nova.compute.claims [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:01:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:20.564 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.680 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:20 np0005534516 podman[390962]: 2025-11-25 09:01:20.811258288 +0000 UTC m=+0.054550265 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 04:01:20 np0005534516 podman[390961]: 2025-11-25 09:01:20.818455994 +0000 UTC m=+0.060754095 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:01:20 np0005534516 nova_compute[253538]: 2025-11-25 09:01:20.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:01:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060430614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.323 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.329 253542 DEBUG nova.compute.provider_tree [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.345 253542 DEBUG nova.scheduler.client.report [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.379 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.379 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.433 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.433 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.450 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.466 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.545 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.546 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.547 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating image(s)#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.569 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.591 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.611 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.615 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.664 253542 DEBUG nova.policy [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.687 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.709 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.714 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG nova.compute.manager [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG nova.compute.manager [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.754 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.754 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:21 np0005534516 nova_compute[253538]: 2025-11-25 09:01:21.958 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 220 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 116 KiB/s wr, 74 op/s
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.030 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.118 253542 DEBUG nova.objects.instance [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.137 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Ensure instance console log exists: /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.139 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:22 np0005534516 nova_compute[253538]: 2025-11-25 09:01:22.623 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Successfully created port: cf17086b-8fa3-4041-8a87-a1f9ede3f871 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:01:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:23 np0005534516 nova_compute[253538]: 2025-11-25 09:01:23.439 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updated VIF entry in instance network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:23 np0005534516 nova_compute[253538]: 2025-11-25 09:01:23.439 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:23 np0005534516 nova_compute[253538]: 2025-11-25 09:01:23.453 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:23 np0005534516 nova_compute[253538]: 2025-11-25 09:01:23.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.014 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Successfully updated port: cf17086b-8fa3-4041-8a87-a1f9ede3f871 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:01:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 236 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 783 KiB/s wr, 75 op/s
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.032 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.032 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.033 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.113 253542 DEBUG nova.compute.manager [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.114 253542 DEBUG nova.compute.manager [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.114 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:24 np0005534516 nova_compute[253538]: 2025-11-25 09:01:24.190 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.720 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.738 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.739 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance network_info: |[{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.740 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.740 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.743 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start _get_guest_xml network_info=[{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.747 253542 WARNING nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.755 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.756 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.759 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.759 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.764 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:25 np0005534516 nova_compute[253538]: 2025-11-25 09:01:25.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 244 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Nov 25 04:01:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129416049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.244 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.272 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.278 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885720655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.738 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.740 253542 DEBUG nova.virt.libvirt.vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:21Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.740 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.742 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.744 253542 DEBUG nova.objects.instance [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.760 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <uuid>3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</uuid>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <name>instance-00000083</name>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-432989417</nova:name>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:01:25</nova:creationTime>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <nova:port uuid="cf17086b-8fa3-4041-8a87-a1f9ede3f871">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:642f" ipVersion="6"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="serial">3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="uuid">3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:2b:64:2f"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <target dev="tapcf17086b-8f"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/console.log" append="off"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:01:26 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:01:26 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:01:26 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:01:26 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.774 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Preparing to wait for external event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.776 253542 DEBUG nova.virt.libvirt.vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:21Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.776 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG os_vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf17086b-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf17086b-8f, col_values=(('external_ids', {'iface-id': 'cf17086b-8fa3-4041-8a87-a1f9ede3f871', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:64:2f', 'vm-uuid': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:26 np0005534516 NetworkManager[48915]: <info>  [1764061286.7872] manager: (tapcf17086b-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.793 253542 INFO os_vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f')#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.835 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.836 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.836 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:2b:64:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.837 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Using config drive#033[00m
Nov 25 04:01:26 np0005534516 nova_compute[253538]: 2025-11-25 09:01:26.859 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.271 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating config drive at /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.277 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebfw6he5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.417 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebfw6he5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.441 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.445 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:27 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.608 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.609 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deleting local config drive /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config because it was imported into RBD.#033[00m
Nov 25 04:01:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:27 np0005534516 kernel: tapcf17086b-8f: entered promiscuous mode
Nov 25 04:01:27 np0005534516 NetworkManager[48915]: <info>  [1764061287.6602] manager: (tapcf17086b-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/555)
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|01341|binding|INFO|Claiming lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 for this chassis.
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|01342|binding|INFO|cf17086b-8fa3-4041-8a87-a1f9ede3f871: Claiming fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.672 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], port_security=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe2b:642f/64', 'neutron:device_id': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=cf17086b-8fa3-4041-8a87-a1f9ede3f871) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.673 162739 INFO neutron.agent.ovn.metadata.agent [-] Port cf17086b-8fa3-4041-8a87-a1f9ede3f871 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d bound to our chassis#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.675 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d#033[00m
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|01343|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 ovn-installed in OVS
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|01344|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 up in Southbound
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.693 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc30e756-75bb-4949-a06d-a3f6fa9e0044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 systemd-machined[215790]: New machine qemu-161-instance-00000083.
Nov 25 04:01:27 np0005534516 systemd[1]: Started Virtual Machine qemu-161-instance-00000083.
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.744 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7acc6e47-9166-4145-a484-b3b72466d6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.748 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a5434a05-71ba-4e2a-8f33-9c9f9a961caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 systemd-udevd[391337]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:01:27 np0005534516 NetworkManager[48915]: <info>  [1764061287.7732] device (tapcf17086b-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:01:27 np0005534516 NetworkManager[48915]: <info>  [1764061287.7742] device (tapcf17086b-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5334810a-b40c-4c80-bc51-75cff835b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e768691-362c-432f-9a45-89c8f705d3e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391353, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.845 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a06a55c-3d5c-4aa3-95b2-ba0b3b606993]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659455, 'tstamp': 659455}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391362, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659459, 'tstamp': 659459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391362, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.847 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:27 np0005534516 podman[391318]: 2025-11-25 09:01:27.853549351 +0000 UTC m=+0.137299128 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 04:01:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:27Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.953 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.954 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:27 np0005534516 nova_compute[253538]: 2025-11-25 09:01:27.969 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 269 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.317 253542 DEBUG nova.compute.manager [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.317 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.318 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.319 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.320 253542 DEBUG nova.compute.manager [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Processing event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.489 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.4893496, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.490 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Started (Lifecycle Event)#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.492 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.494 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.498 253542 INFO nova.virt.libvirt.driver [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance spawned successfully.#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.498 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.504 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.507 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.515 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.516 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.516 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.491612, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.550 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.553 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.4942284, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.553 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.580 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.582 253542 INFO nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 7.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.582 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.584 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.650 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.736 253542 INFO nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 8.22 seconds to build instance.#033[00m
Nov 25 04:01:28 np0005534516 nova_compute[253538]: 2025-11-25 09:01:28.764 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:01:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:01:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:01:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:01:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 293 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.449 253542 DEBUG nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.451 253542 DEBUG nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:30 np0005534516 nova_compute[253538]: 2025-11-25 09:01:30.451 253542 WARNING nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received unexpected event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:01:31 np0005534516 nova_compute[253538]: 2025-11-25 09:01:31.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Nov 25 04:01:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:32 np0005534516 nova_compute[253538]: 2025-11-25 09:01:32.980 253542 DEBUG nova.compute.manager [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:32 np0005534516 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG nova.compute.manager [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:32 np0005534516 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:32 np0005534516 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:32 np0005534516 nova_compute[253538]: 2025-11-25 09:01:32.982 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:33 np0005534516 nova_compute[253538]: 2025-11-25 09:01:33.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Nov 25 04:01:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Nov 25 04:01:36 np0005534516 nova_compute[253538]: 2025-11-25 09:01:36.185 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:36 np0005534516 nova_compute[253538]: 2025-11-25 09:01:36.186 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:36 np0005534516 nova_compute[253538]: 2025-11-25 09:01:36.204 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:36 np0005534516 nova_compute[253538]: 2025-11-25 09:01:36.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Nov 25 04:01:38 np0005534516 nova_compute[253538]: 2025-11-25 09:01:38.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:39 np0005534516 nova_compute[253538]: 2025-11-25 09:01:39.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 04:01:40 np0005534516 nova_compute[253538]: 2025-11-25 09:01:40.907 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:40 np0005534516 nova_compute[253538]: 2025-11-25 09:01:40.908 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:40 np0005534516 nova_compute[253538]: 2025-11-25 09:01:40.920 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:01:40 np0005534516 nova_compute[253538]: 2025-11-25 09:01:40.994 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:40 np0005534516 nova_compute[253538]: 2025-11-25 09:01:40.995 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.007 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.008 253542 INFO nova.compute.claims [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.209 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:01:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319355486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.692 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.699 253542 DEBUG nova.compute.provider_tree [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.716 253542 DEBUG nova.scheduler.client.report [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.746 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.747 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.786 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.786 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.809 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.830 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.912 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.914 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.914 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating image(s)#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.954 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:41 np0005534516 nova_compute[253538]: 2025-11-25 09:01:41.982 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.008 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.012 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 308 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 97 op/s
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.089 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.089 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.090 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.091 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.108 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.111 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:42Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:64:2f 10.100.0.9
Nov 25 04:01:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:42Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:64:2f 10.100.0.9
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.177 253542 DEBUG nova.policy [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.374 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.454 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.557 253542 DEBUG nova.objects.instance [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.579 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Ensure instance console log exists: /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:42 np0005534516 nova_compute[253538]: 2025-11-25 09:01:42.581 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.708 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Successfully created port: c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 WARNING nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid f05e074d-5838-4c4b-89dc-76afe386f635 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 830678ef-9f48-4175-aa6d-666c24a11689 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.904 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.907 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:43 np0005534516 nova_compute[253538]: 2025-11-25 09:01:43.908 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.7 MiB/s wr, 66 op/s
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.780 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Successfully updated port: c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG nova.compute.manager [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG nova.compute.manager [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:44 np0005534516 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.148 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.889 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance network_info: |[{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.931 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.933 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start _get_guest_xml network_info=[{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.938 253542 WARNING nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.946 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.947 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.950 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.954 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:01:45 np0005534516 nova_compute[253538]: 2025-11-25 09:01:45.957 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 04:01:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804695167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.438 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.465 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.469 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.583 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.605 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.813 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:01:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068414821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.989 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.991 253542 DEBUG nova.virt.libvirt.vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:41Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.991 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.992 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:46 np0005534516 nova_compute[253538]: 2025-11-25 09:01:46.993 253542 DEBUG nova.objects.instance [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.008 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <uuid>2d372af7-dca6-4f5f-bd4c-beedbb8cc055</uuid>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <name>instance-00000084</name>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435</nova:name>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:01:45</nova:creationTime>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <nova:port uuid="c1655f18-1254-402d-b9b6-7ca2d5a8bcdc">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="serial">2d372af7-dca6-4f5f-bd4c-beedbb8cc055</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="uuid">2d372af7-dca6-4f5f-bd4c-beedbb8cc055</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:cb:b3:62"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <target dev="tapc1655f18-12"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/console.log" append="off"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:01:47 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:01:47 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:01:47 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:01:47 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Preparing to wait for external event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.010 253542 DEBUG nova.virt.libvirt.vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:41Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.010 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.011 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.011 253542 DEBUG os_vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.012 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.013 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1655f18-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1655f18-12, col_values=(('external_ids', {'iface-id': 'c1655f18-1254-402d-b9b6-7ca2d5a8bcdc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:b3:62', 'vm-uuid': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:47 np0005534516 NetworkManager[48915]: <info>  [1764061307.0189] manager: (tapc1655f18-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.020 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.025 253542 INFO os_vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12')#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:cb:b3:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.081 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Using config drive#033[00m
Nov 25 04:01:47 np0005534516 nova_compute[253538]: 2025-11-25 09:01:47.101 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 04:01:48 np0005534516 nova_compute[253538]: 2025-11-25 09:01:48.280 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:48 np0005534516 nova_compute[253538]: 2025-11-25 09:01:48.281 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:48 np0005534516 nova_compute[253538]: 2025-11-25 09:01:48.298 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:48 np0005534516 nova_compute[253538]: 2025-11-25 09:01:48.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:49 np0005534516 nova_compute[253538]: 2025-11-25 09:01:49.201 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating config drive at /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config#033[00m
Nov 25 04:01:49 np0005534516 nova_compute[253538]: 2025-11-25 09:01:49.206 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndydkpuw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:49 np0005534516 nova_compute[253538]: 2025-11-25 09:01:49.347 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndydkpuw" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:49 np0005534516 nova_compute[253538]: 2025-11-25 09:01:49.373 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:01:49 np0005534516 nova_compute[253538]: 2025-11-25 09:01:49.378 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.774 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.792 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.793 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.795 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.868 253542 DEBUG nova.compute.manager [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.869 253542 DEBUG nova.compute.manager [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.869 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.870 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.870 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.899 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.901 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.903 253542 INFO nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Terminating instance#033[00m
Nov 25 04:01:50 np0005534516 nova_compute[253538]: 2025-11-25 09:01:50.905 253542 DEBUG nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.066 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.067 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deleting local config drive /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config because it was imported into RBD.#033[00m
Nov 25 04:01:51 np0005534516 NetworkManager[48915]: <info>  [1764061311.1279] manager: (tapc1655f18-12): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Nov 25 04:01:51 np0005534516 kernel: tapc1655f18-12: entered promiscuous mode
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01345|binding|INFO|Claiming lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for this chassis.
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01346|binding|INFO|c1655f18-1254-402d-b9b6-7ca2d5a8bcdc: Claiming fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.142 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b3:62 10.100.0.12'], port_security=['fa:16:3e:cb:b3:62 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.143 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 bound to our chassis#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.145 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842#033[00m
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01347|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc ovn-installed in OVS
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01348|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc up in Southbound
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.173 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[027735f8-7815-4e36-8392-b66640510e7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 systemd-machined[215790]: New machine qemu-162-instance-00000084.
Nov 25 04:01:51 np0005534516 systemd[1]: Started Virtual Machine qemu-162-instance-00000084.
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b863ed8-0661-459e-865a-34d87c058d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 systemd-udevd[391752]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.212 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4d16cb4c-f6db-4555-872f-0e0428af6044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 NetworkManager[48915]: <info>  [1764061311.2233] device (tapc1655f18-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:01:51 np0005534516 NetworkManager[48915]: <info>  [1764061311.2243] device (tapc1655f18-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:01:51 np0005534516 podman[391728]: 2025-11-25 09:01:51.240453032 +0000 UTC m=+0.065378641 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.246 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e3006cbd-32c2-42c9-934d-88cd34e10e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 podman[391726]: 2025-11-25 09:01:51.248235893 +0000 UTC m=+0.075126535 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f740e7e-9af1-4333-acbb-e96ed487cff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391777, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a706244c-441b-47e1-90a8-ac9f70fd0457]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662744, 'tstamp': 662744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391780, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662746, 'tstamp': 662746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391780, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.291 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.404 253542 DEBUG nova.compute.manager [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.405 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.405 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.406 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.406 253542 DEBUG nova.compute.manager [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Processing event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:01:51 np0005534516 kernel: tapcf17086b-8f (unregistering): left promiscuous mode
Nov 25 04:01:51 np0005534516 NetworkManager[48915]: <info>  [1764061311.4451] device (tapcf17086b-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01349|binding|INFO|Releasing lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 from this chassis (sb_readonly=0)
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01350|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 down in Southbound
Nov 25 04:01:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:51Z|01351|binding|INFO|Removing iface tapcf17086b-8f ovn-installed in OVS
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.461 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], port_security=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe2b:642f/64', 'neutron:device_id': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=cf17086b-8fa3-4041-8a87-a1f9ede3f871) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.462 162739 INFO neutron.agent.ovn.metadata.agent [-] Port cf17086b-8fa3-4041-8a87-a1f9ede3f871 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d unbound from our chassis#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b514d0d1-7919-4a42-900a-0df5a00931f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.524 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff83287-4a90-4003-9ef9-6df9999d18ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.527 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82d221e7-1c3c-41db-9457-0bde78fee2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 25 04:01:51 np0005534516 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000083.scope: Consumed 13.710s CPU time.
Nov 25 04:01:51 np0005534516 systemd-machined[215790]: Machine qemu-161-instance-00000083 terminated.
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.552 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01a94cee-3e47-442b-a6d6-b527e34ab4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.573 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d44fed5e-de06-4f58-a30b-eaafd6126e75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391790, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.587 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0ea9c7-904b-49ab-80d8-5dbc6e365881]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659455, 'tstamp': 659455}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391791, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659459, 'tstamp': 659459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391791, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.589 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.741 253542 INFO nova.virt.libvirt.driver [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance destroyed successfully.#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.742 253542 DEBUG nova.objects.instance [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.753 253542 DEBUG nova.virt.libvirt.vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.753 253542 DEBUG nova.network.os_vif_util [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.754 253542 DEBUG nova.network.os_vif_util [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.754 253542 DEBUG os_vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.756 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf17086b-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.762 253542 INFO os_vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f')#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.956 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.955714, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.956 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Started (Lifecycle Event)#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.958 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.961 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.964 253542 INFO nova.virt.libvirt.driver [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance spawned successfully.#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.964 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.987 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.992 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.993 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.993 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.994 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.994 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:51 np0005534516 nova_compute[253538]: 2025-11-25 09:01:51.995 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.019 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.020 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.9559531, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.020 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:01:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.051 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.055 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.961139, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.055 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.064 253542 INFO nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 10.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.065 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.086 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.090 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.129 253542 INFO nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 11.16 seconds to build instance.#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.142 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.143 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.143 253542 INFO nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.144 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.521 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.524 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.541 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:52 np0005534516 nova_compute[253538]: 2025-11-25 09:01:52.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:01:53
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'images', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:01:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.495 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 WARNING nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received unexpected event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with vm_state active and task_state None.#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.500 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.500 253542 WARNING nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received unexpected event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:01:53 np0005534516 nova_compute[253538]: 2025-11-25 09:01:53.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.024 253542 INFO nova.virt.libvirt.driver [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deleting instance files /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_del#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.025 253542 INFO nova.virt.libvirt.driver [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deletion of /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_del complete#033[00m
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:01:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 3.0 MiB/s wr, 65 op/s
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.132 253542 INFO nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 3.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.133 253542 DEBUG oslo.service.loopingcall [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.133 253542 DEBUG nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.134 253542 DEBUG nova.network.neutron [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.575 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.897 253542 DEBUG nova.network.neutron [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.913 253542 INFO nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 0.78 seconds to deallocate network for instance.#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.950 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:54 np0005534516 nova_compute[253538]: 2025-11-25 09:01:54.951 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:01:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222683498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.073 253542 DEBUG oslo_concurrency.processutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.184 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.189 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.190 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.195 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.195 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.356 253542 DEBUG nova.compute.manager [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.357 253542 DEBUG nova.compute.manager [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.358 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.359 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.359 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.508 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.509 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3114MB free_disk=59.84657669067383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.509 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.596 253542 DEBUG nova.compute.manager [req-2fbf3e9a-9153-462e-bda2-8bc9d78f1467 req-b84792ee-a9ed-4fa4-99a7-ab4524481ba0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-deleted-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:01:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1247479192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.697 253542 DEBUG oslo_concurrency.processutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.704 253542 DEBUG nova.compute.provider_tree [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.726 253542 DEBUG nova.scheduler.client.report [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.750 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.754 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.802 253542 INFO nova.scheduler.client.report [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.897 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f05e074d-5838-4c4b-89dc-76afe386f635 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 830678ef-9f48-4175-aa6d-666c24a11689 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.899 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.941 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:55 np0005534516 nova_compute[253538]: 2025-11-25 09:01:55.993 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:01:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 330 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 25 04:01:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:01:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807212356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.599 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.604 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.631 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.673 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.674 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.784 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.786 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.787 253542 INFO nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Terminating instance#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.789 253542 DEBUG nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.930 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.930 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:56 np0005534516 nova_compute[253538]: 2025-11-25 09:01:56.947 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:57 np0005534516 kernel: tap30ba0f84-3d (unregistering): left promiscuous mode
Nov 25 04:01:57 np0005534516 NetworkManager[48915]: <info>  [1764061317.2248] device (tap30ba0f84-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:01:57 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:57Z|01352|binding|INFO|Releasing lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b from this chassis (sb_readonly=0)
Nov 25 04:01:57 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:57Z|01353|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b down in Southbound
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:57 np0005534516 ovn_controller[152859]: 2025-11-25T09:01:57Z|01354|binding|INFO|Removing iface tap30ba0f84-3d ovn-installed in OVS
Nov 25 04:01:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.241 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], port_security=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fed4:d053/64', 'neutron:device_id': 'f05e074d-5838-4c4b-89dc-76afe386f635', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=30ba0f84-3dca-47f6-911d-5fff56a99b0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:01:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.242 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 30ba0f84-3dca-47f6-911d-5fff56a99b0b in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d unbound from our chassis#033[00m
Nov 25 04:01:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.243 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:01:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.243 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7866c655-3561-4bd5-b4eb-893ac1e4d7b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.245 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d namespace which is not needed anymore#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:57 np0005534516 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 25 04:01:57 np0005534516 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000081.scope: Consumed 16.762s CPU time.
Nov 25 04:01:57 np0005534516 systemd-machined[215790]: Machine qemu-159-instance-00000081 terminated.
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.428 253542 INFO nova.virt.libvirt.driver [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance destroyed successfully.#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.429 253542 DEBUG nova.objects.instance [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.439 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.442 253542 DEBUG nova.virt.libvirt.vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:00:40Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.443 253542 DEBUG nova.network.os_vif_util [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.444 253542 DEBUG nova.network.os_vif_util [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.444 253542 DEBUG os_vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.446 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ba0f84-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.453 253542 INFO os_vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d')#033[00m
Nov 25 04:01:57 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : haproxy version is 2.8.14-c23fe91
Nov 25 04:01:57 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : path to executable is /usr/sbin/haproxy
Nov 25 04:01:57 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [WARNING]  (389485) : Exiting Master process...
Nov 25 04:01:57 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [ALERT]    (389485) : Current worker (389487) exited with code 143 (Terminated)
Nov 25 04:01:57 np0005534516 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [WARNING]  (389485) : All workers exited. Exiting... (0)
Nov 25 04:01:57 np0005534516 systemd[1]: libpod-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope: Deactivated successfully.
Nov 25 04:01:57 np0005534516 podman[391956]: 2025-11-25 09:01:57.528172764 +0000 UTC m=+0.185802928 container died d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:01:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.690 253542 DEBUG nova.compute.manager [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG nova.compute.manager [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:01:57 np0005534516 nova_compute[253538]: 2025-11-25 09:01:57.692 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:01:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60-userdata-shm.mount: Deactivated successfully.
Nov 25 04:01:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-95f8cf4280ba7f34d6348b543dd0da02fb745e7d55375aa17f4fa4a2f697425a-merged.mount: Deactivated successfully.
Nov 25 04:01:58 np0005534516 podman[391956]: 2025-11-25 09:01:58.015599271 +0000 UTC m=+0.673229435 container cleanup d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 04:01:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Nov 25 04:01:58 np0005534516 systemd[1]: libpod-conmon-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope: Deactivated successfully.
Nov 25 04:01:58 np0005534516 podman[392016]: 2025-11-25 09:01:58.235224579 +0000 UTC m=+0.163099801 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:01:58 np0005534516 podman[392015]: 2025-11-25 09:01:58.605512538 +0000 UTC m=+0.556672223 container remove d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:01:58 np0005534516 nova_compute[253538]: 2025-11-25 09:01:58.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6da2d84-4c8c-4188-894a-35557877ff08]: (4, ('Tue Nov 25 09:01:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d (d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60)\nd3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60\nTue Nov 25 09:01:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d (d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60)\nd3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52f367dd-0db5-4254-a308-f9fa4a8a7538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.617 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:01:58 np0005534516 nova_compute[253538]: 2025-11-25 09:01:58.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:58 np0005534516 kernel: tap6c2834b5-00: left promiscuous mode
Nov 25 04:01:58 np0005534516 nova_compute[253538]: 2025-11-25 09:01:58.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85a9f392-bb4c-441e-bda4-95d540c5f5f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.651 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47abf192-b0cf-45d0-8500-8af66767f1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.653 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9595b11e-9a4c-4061-8da8-35da5856d75d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 nova_compute[253538]: 2025-11-25 09:01:58.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9befea9e-1031-4ffb-8380-e91c242be8a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659434, 'reachable_time': 37034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392056, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:58 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6c2834b5\x2d0444\x2d432c\x2d8da4\x2dc0b4f4aabc4d.mount: Deactivated successfully.
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.676 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:01:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.676 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[862172d9-dede-41fa-ad1f-d90cbea5c720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.497 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.497 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.518 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.518 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.520 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.520 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 DEBUG nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:01:59 np0005534516 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 WARNING nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received unexpected event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:02:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 285 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 114 op/s
Nov 25 04:02:00 np0005534516 nova_compute[253538]: 2025-11-25 09:02:00.186 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:02:00 np0005534516 nova_compute[253538]: 2025-11-25 09:02:00.187 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:00 np0005534516 nova_compute[253538]: 2025-11-25 09:02:00.203 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 260 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 116 op/s
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.108 253542 INFO nova.virt.libvirt.driver [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deleting instance files /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635_del#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.110 253542 INFO nova.virt.libvirt.driver [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deletion of /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635_del complete#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 INFO nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 5.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 DEBUG oslo.service.loopingcall [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 DEBUG nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.176 253542 DEBUG nova.network.neutron [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.761 253542 DEBUG nova.network.neutron [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.776 253542 INFO nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 0.60 seconds to deallocate network for instance.#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.811 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.812 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.839 253542 DEBUG nova.compute.manager [req-7550e33d-b7f4-4d45-9cfe-57972b56016b req-aa0b8644-4b9c-490c-ade7-68eb962a0991 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-deleted-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:02 np0005534516 nova_compute[253538]: 2025-11-25 09:02:02.881 253542 DEBUG oslo_concurrency.processutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62532698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.332 253542 DEBUG oslo_concurrency.processutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.342 253542 DEBUG nova.compute.provider_tree [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.363 253542 DEBUG nova.scheduler.client.report [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.397 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.432 253542 INFO nova.scheduler.client.report [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance f05e074d-5838-4c4b-89dc-76afe386f635#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.498 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:03 np0005534516 nova_compute[253538]: 2025-11-25 09:02:03.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 247 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.5 KiB/s wr, 122 op/s
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012935621784853813 of space, bias 1.0, pg target 0.3880686535456144 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:02:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:02:05 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:05Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 04:02:05 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:05Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 04:02:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 224 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Nov 25 04:02:06 np0005534516 nova_compute[253538]: 2025-11-25 09:02:06.740 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061311.73886, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:06 np0005534516 nova_compute[253538]: 2025-11-25 09:02:06.741 253542 INFO nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:02:06 np0005534516 nova_compute[253538]: 2025-11-25 09:02:06.764 253542 DEBUG nova.compute.manager [None req-7f7ba636-9164-40a0-a2f9-b3554f9096b1 - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:07 np0005534516 nova_compute[253538]: 2025-11-25 09:02:07.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 229 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 146 op/s
Nov 25 04:02:08 np0005534516 nova_compute[253538]: 2025-11-25 09:02:08.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.764 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.766 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.767 253542 INFO nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Terminating instance#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.769 253542 DEBUG nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:02:11 np0005534516 kernel: tapc1655f18-12 (unregistering): left promiscuous mode
Nov 25 04:02:11 np0005534516 NetworkManager[48915]: <info>  [1764061331.8745] device (tapc1655f18-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:02:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:11Z|01355|binding|INFO|Releasing lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc from this chassis (sb_readonly=0)
Nov 25 04:02:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:11Z|01356|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc down in Southbound
Nov 25 04:02:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:11Z|01357|binding|INFO|Removing iface tapc1655f18-12 ovn-installed in OVS
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.897 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b3:62 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.900 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 unbound from our chassis#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.903 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842#033[00m
Nov 25 04:02:11 np0005534516 nova_compute[253538]: 2025-11-25 09:02:11.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.921 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f159209-1cbd-4848-bfe0-1b5719373607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:11 np0005534516 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 25 04:02:11 np0005534516 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000084.scope: Consumed 13.992s CPU time.
Nov 25 04:02:11 np0005534516 systemd-machined[215790]: Machine qemu-162-instance-00000084 terminated.
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.962 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[da8ea646-f717-4800-bc71-96c3ac2b7bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.965 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd07486-23c2-4b48-9f4c-00a391dc4861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.995 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f472863d-1742-4ebf-a9d2-926f8692a8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.011 253542 INFO nova.virt.libvirt.driver [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance destroyed successfully.#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.012 253542 DEBUG nova.objects.instance [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.013 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84f7a74b-6971-470c-97a4-295f0b25a1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392098, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.024 253542 DEBUG nova.virt.libvirt.vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:52Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.024 253542 DEBUG nova.network.os_vif_util [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.025 253542 DEBUG nova.network.os_vif_util [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.026 253542 DEBUG os_vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.029 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.029 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1655f18-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.032 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd20c0cd-e6fa-4348-953e-aee9a0d4b402]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662744, 'tstamp': 662744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392103, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662746, 'tstamp': 662746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392103, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.035 253542 INFO os_vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12')#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.039 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.123 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.123 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.124 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.124 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.125 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.125 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:02:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:12Z|01358|binding|INFO|Releasing lport e613213f-7deb-43ce-acbb-25b798b2b340 from this chassis (sb_readonly=0)
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.427 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061317.426396, f05e074d-5838-4c4b-89dc-76afe386f635 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.428 253542 INFO nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.450 253542 DEBUG nova.compute.manager [None req-59424f37-1b62-4e8c-8a26-954b8c195306 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.682 253542 INFO nova.virt.libvirt.driver [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deleting instance files /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_del#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.683 253542 INFO nova.virt.libvirt.driver [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deletion of /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_del complete#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.763 253542 INFO nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG oslo.service.loopingcall [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:02:12 np0005534516 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG nova.network.neutron [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:02:13 np0005534516 nova_compute[253538]: 2025-11-25 09:02:13.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:13 np0005534516 nova_compute[253538]: 2025-11-25 09:02:13.963 253542 DEBUG nova.network.neutron [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:13 np0005534516 nova_compute[253538]: 2025-11-25 09:02:13.983 253542 INFO nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 1.22 seconds to deallocate network for instance.#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.046 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.046 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 229 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.063 253542 DEBUG nova.compute.manager [req-29171dff-1b2d-4599-98e6-9723c6ac3c90 req-23d7d12f-94d8-42de-bfd9-e5679260074c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-deleted-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.120 253542 DEBUG oslo_concurrency.processutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.227 253542 DEBUG nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.228 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.228 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 DEBUG nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 WARNING nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received unexpected event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:02:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462729414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.550 253542 DEBUG oslo_concurrency.processutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.558 253542 DEBUG nova.compute.provider_tree [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.570 253542 DEBUG nova.scheduler.client.report [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.588 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.617 253542 INFO nova.scheduler.client.report [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 2d372af7-dca6-4f5f-bd4c-beedbb8cc055#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.707 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:14.726 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:02:14 np0005534516 nova_compute[253538]: 2025-11-25 09:02:14.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:14 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:14.727 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.744 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.744 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.745 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.745 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.746 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.747 253542 INFO nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Terminating instance#033[00m
Nov 25 04:02:15 np0005534516 nova_compute[253538]: 2025-11-25 09:02:15.748 253542 DEBUG nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:02:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 193 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.180 253542 DEBUG nova.compute.manager [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.181 253542 DEBUG nova.compute.manager [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.181 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.182 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.182 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:02:16 np0005534516 kernel: tap2719889c-c9 (unregistering): left promiscuous mode
Nov 25 04:02:16 np0005534516 NetworkManager[48915]: <info>  [1764061336.3348] device (tap2719889c-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:02:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:16Z|01359|binding|INFO|Releasing lport 2719889c-c962-425f-9df3-6f3d741ca0ec from this chassis (sb_readonly=0)
Nov 25 04:02:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:16Z|01360|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec down in Southbound
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:16Z|01361|binding|INFO|Removing iface tap2719889c-c9 ovn-installed in OVS
Nov 25 04:02:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.355 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:1f:9b 10.100.0.7'], port_security=['fa:16:3e:0c:1f:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '830678ef-9f48-4175-aa6d-666c24a11689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e 8a14d0f4-bb68-44c1-9d93-80bac0a038b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2719889c-c962-425f-9df3-6f3d741ca0ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:02:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.357 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2719889c-c962-425f-9df3-6f3d741ca0ec in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 unbound from our chassis#033[00m
Nov 25 04:02:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:02:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0621ca-28af-4f48-aa5e-1700de7b9647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.361 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 namespace which is not needed anymore#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:16 np0005534516 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 04:02:16 np0005534516 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000082.scope: Consumed 16.356s CPU time.
Nov 25 04:02:16 np0005534516 systemd-machined[215790]: Machine qemu-160-instance-00000082 terminated.
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : haproxy version is 2.8.14-c23fe91
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : path to executable is /usr/sbin/haproxy
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : Exiting Master process...
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : Exiting Master process...
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [ALERT]    (390407) : Current worker (390411) exited with code 143 (Terminated)
Nov 25 04:02:16 np0005534516 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : All workers exited. Exiting... (0)
Nov 25 04:02:16 np0005534516 systemd[1]: libpod-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope: Deactivated successfully.
Nov 25 04:02:16 np0005534516 podman[392171]: 2025-11-25 09:02:16.60418489 +0000 UTC m=+0.102359267 container died f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.605 253542 INFO nova.virt.libvirt.driver [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance destroyed successfully.#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.606 253542 DEBUG nova.objects.instance [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.623 253542 DEBUG nova.virt.libvirt.vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:14Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.624 253542 DEBUG nova.network.os_vif_util [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.625 253542 DEBUG nova.network.os_vif_util [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.625 253542 DEBUG os_vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.628 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2719889c-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:02:16 np0005534516 nova_compute[253538]: 2025-11-25 09:02:16.635 253542 INFO os_vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9')#033[00m
Nov 25 04:02:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a-userdata-shm.mount: Deactivated successfully.
Nov 25 04:02:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-19a4031ab9bbc2e36451a0ff49fcc18576186761ed3f26382ce7cd4974678668-merged.mount: Deactivated successfully.
Nov 25 04:02:17 np0005534516 podman[392171]: 2025-11-25 09:02:17.25188046 +0000 UTC m=+0.750054847 container cleanup f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 04:02:17 np0005534516 systemd[1]: libpod-conmon-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope: Deactivated successfully.
Nov 25 04:02:17 np0005534516 nova_compute[253538]: 2025-11-25 09:02:17.384 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updated VIF entry in instance network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:02:17 np0005534516 nova_compute[253538]: 2025-11-25 09:02:17.385 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:17 np0005534516 nova_compute[253538]: 2025-11-25 09:02:17.439 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 966 KiB/s wr, 134 op/s
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.281 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.282 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.285 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.285 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.286 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:18 np0005534516 podman[392230]: 2025-11-25 09:02:18.286292395 +0000 UTC m=+1.006207089 container remove f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.286 253542 WARNING nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received unexpected event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75971a-87d7-4b5e-bcb6-1296a1a7aa96]: (4, ('Tue Nov 25 09:02:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 (f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a)\nf925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a\nTue Nov 25 09:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 (f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a)\nf925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[116ec404-e7c7-4ebc-80f2-72cdcf09a020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.295 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:18 np0005534516 kernel: tap01d5ee0a-50: left promiscuous mode
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.314 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe58ad-4dda-41ea-991f-0d1251717669]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f596e55-4d06-4e52-bac6-b193dacc1f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a18cc45-7c52-4a5c-a470-0fd9e7b28f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e558b2f0-4cda-4ab8-9a20-ff39d153baaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662722, 'reachable_time': 32740, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392246, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 systemd[1]: run-netns-ovnmeta\x2d01d5ee0a\x2d5a87\x2d445b\x2d8539\x2db33b1f9d0842.mount: Deactivated successfully.
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.353 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:02:18 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.354 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[60abf6d5-c078-4018-becb-39900265daf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:18 np0005534516 nova_compute[253538]: 2025-11-25 09:02:18.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:02:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:02:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 526 KiB/s wr, 93 op/s
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 98628459-36dd-4b35-8bd4-74d5197386eb does not exist
Nov 25 04:02:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 427ae1a3-3f72-44a8-9d50-4511e3cf4cf1 does not exist
Nov 25 04:02:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4d590eec-0606-4779-8627-b76c553221be does not exist
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.474439852 +0000 UTC m=+0.097671300 container create 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.400036937 +0000 UTC m=+0.023268465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:21 np0005534516 systemd[1]: Started libpod-conmon-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope.
Nov 25 04:02:21 np0005534516 podman[392657]: 2025-11-25 09:02:21.606181948 +0000 UTC m=+0.078974340 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 04:02:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:21 np0005534516 nova_compute[253538]: 2025-11-25 09:02:21.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.676611895 +0000 UTC m=+0.299843333 container init 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.684724076 +0000 UTC m=+0.307955504 container start 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:02:21 np0005534516 peaceful_noether[392689]: 167 167
Nov 25 04:02:21 np0005534516 systemd[1]: libpod-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope: Deactivated successfully.
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.725755393 +0000 UTC m=+0.348986811 container attach 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:02:21 np0005534516 podman[392643]: 2025-11-25 09:02:21.726228145 +0000 UTC m=+0.349459553 container died 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:02:21 np0005534516 podman[392658]: 2025-11-25 09:02:21.861699352 +0000 UTC m=+0.334119635 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:02:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0d0060f63eaa902da1b7dbee76a53390e9dab1ce284c38454bab3deb8e55db62-merged.mount: Deactivated successfully.
Nov 25 04:02:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:21 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:02:22 np0005534516 podman[392643]: 2025-11-25 09:02:22.041202118 +0000 UTC m=+0.664433536 container remove 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:02:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 139 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 40 op/s
Nov 25 04:02:22 np0005534516 systemd[1]: libpod-conmon-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope: Deactivated successfully.
Nov 25 04:02:22 np0005534516 podman[392723]: 2025-11-25 09:02:22.252598243 +0000 UTC m=+0.081317285 container create 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:02:22 np0005534516 podman[392723]: 2025-11-25 09:02:22.197786961 +0000 UTC m=+0.026506003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:22 np0005534516 systemd[1]: Started libpod-conmon-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope.
Nov 25 04:02:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:22 np0005534516 podman[392723]: 2025-11-25 09:02:22.386844167 +0000 UTC m=+0.215563189 container init 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:22 np0005534516 podman[392723]: 2025-11-25 09:02:22.395223185 +0000 UTC m=+0.223942227 container start 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:02:22 np0005534516 podman[392723]: 2025-11-25 09:02:22.588396953 +0000 UTC m=+0.417115965 container attach 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:02:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:22 np0005534516 nova_compute[253538]: 2025-11-25 09:02:22.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:23 np0005534516 nervous_germain[392739]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:02:23 np0005534516 nervous_germain[392739]: --> relative data size: 1.0
Nov 25 04:02:23 np0005534516 nervous_germain[392739]: --> All data devices are unavailable
Nov 25 04:02:23 np0005534516 nova_compute[253538]: 2025-11-25 09:02:23.712 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:23 np0005534516 systemd[1]: libpod-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Deactivated successfully.
Nov 25 04:02:23 np0005534516 systemd[1]: libpod-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Consumed 1.101s CPU time.
Nov 25 04:02:23 np0005534516 podman[392723]: 2025-11-25 09:02:23.753400923 +0000 UTC m=+1.582119955 container died 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 118 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 43 op/s
Nov 25 04:02:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f-merged.mount: Deactivated successfully.
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.388 253542 INFO nova.virt.libvirt.driver [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deleting instance files /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689_del#033[00m
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.390 253542 INFO nova.virt.libvirt.driver [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deletion of /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689_del complete#033[00m
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.458 253542 INFO nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 8.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG oslo.service.loopingcall [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:02:24 np0005534516 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG nova.network.neutron [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:02:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:24.728 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:25 np0005534516 podman[392723]: 2025-11-25 09:02:25.071536101 +0000 UTC m=+2.900255123 container remove 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:02:25 np0005534516 systemd[1]: libpod-conmon-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Deactivated successfully.
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.391 253542 DEBUG nova.network.neutron [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.426 253542 DEBUG nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-deleted-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.427 253542 INFO nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Neutron deleted interface 2719889c-c962-425f-9df3-6f3d741ca0ec; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.427 253542 DEBUG nova.network.neutron [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.433 253542 INFO nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 0.97 seconds to deallocate network for instance.#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.462 253542 DEBUG nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Detach interface failed, port_id=2719889c-c962-425f-9df3-6f3d741ca0ec, reason: Instance 830678ef-9f48-4175-aa6d-666c24a11689 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.563 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.564 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:25 np0005534516 nova_compute[253538]: 2025-11-25 09:02:25.640 253542 DEBUG oslo_concurrency.processutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:25 np0005534516 podman[392925]: 2025-11-25 09:02:25.803166555 +0000 UTC m=+0.031201811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:25 np0005534516 podman[392925]: 2025-11-25 09:02:25.909197221 +0000 UTC m=+0.137232467 container create f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:02:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700976738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 48 op/s
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.075 253542 DEBUG oslo_concurrency.processutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.082 253542 DEBUG nova.compute.provider_tree [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.096 253542 DEBUG nova.scheduler.client.report [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.119 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.158 253542 INFO nova.scheduler.client.report [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 830678ef-9f48-4175-aa6d-666c24a11689#033[00m
Nov 25 04:02:26 np0005534516 systemd[1]: Started libpod-conmon-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope.
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.241 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:26 np0005534516 podman[392925]: 2025-11-25 09:02:26.358652115 +0000 UTC m=+0.586687381 container init f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:26 np0005534516 podman[392925]: 2025-11-25 09:02:26.367803643 +0000 UTC m=+0.595838879 container start f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:26 np0005534516 gracious_wu[392960]: 167 167
Nov 25 04:02:26 np0005534516 systemd[1]: libpod-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope: Deactivated successfully.
Nov 25 04:02:26 np0005534516 conmon[392960]: conmon f932d28ebf0f7c92599f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope/container/memory.events
Nov 25 04:02:26 np0005534516 podman[392925]: 2025-11-25 09:02:26.645895183 +0000 UTC m=+0.873930439 container attach f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:02:26 np0005534516 podman[392925]: 2025-11-25 09:02:26.648638198 +0000 UTC m=+0.876673434 container died f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:26 np0005534516 nova_compute[253538]: 2025-11-25 09:02:26.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ac65601efe6fc9495de3437e29022a7af158daa35c868bd48f67727d684505e5-merged.mount: Deactivated successfully.
Nov 25 04:02:27 np0005534516 nova_compute[253538]: 2025-11-25 09:02:27.009 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061332.008601, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:27 np0005534516 nova_compute[253538]: 2025-11-25 09:02:27.011 253542 INFO nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:02:27 np0005534516 nova_compute[253538]: 2025-11-25 09:02:27.033 253542 DEBUG nova.compute.manager [None req-04cd7668-d974-4567-bad0-e40dd3274c16 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:28 np0005534516 podman[392925]: 2025-11-25 09:02:28.028957351 +0000 UTC m=+2.256992577 container remove f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:02:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Nov 25 04:02:28 np0005534516 systemd[1]: libpod-conmon-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope: Deactivated successfully.
Nov 25 04:02:28 np0005534516 podman[392983]: 2025-11-25 09:02:28.284563629 +0000 UTC m=+0.096751001 container create b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:02:28 np0005534516 podman[392983]: 2025-11-25 09:02:28.225040601 +0000 UTC m=+0.037227983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:28 np0005534516 systemd[1]: Started libpod-conmon-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope.
Nov 25 04:02:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:28 np0005534516 nova_compute[253538]: 2025-11-25 09:02:28.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:28 np0005534516 podman[392983]: 2025-11-25 09:02:28.839516624 +0000 UTC m=+0.651704066 container init b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:02:28 np0005534516 podman[392983]: 2025-11-25 09:02:28.854718098 +0000 UTC m=+0.666905500 container start b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:02:29 np0005534516 podman[392983]: 2025-11-25 09:02:29.20171843 +0000 UTC m=+1.013905842 container attach b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 04:02:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:02:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:02:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:02:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:02:29 np0005534516 podman[393002]: 2025-11-25 09:02:29.390463112 +0000 UTC m=+0.634211013 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:02:29 np0005534516 nova_compute[253538]: 2025-11-25 09:02:29.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:29 np0005534516 nova_compute[253538]: 2025-11-25 09:02:29.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]: {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    "0": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "devices": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "/dev/loop3"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            ],
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_name": "ceph_lv0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_size": "21470642176",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "name": "ceph_lv0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "tags": {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_name": "ceph",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.crush_device_class": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.encrypted": "0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_id": "0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.vdo": "0"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            },
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "vg_name": "ceph_vg0"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        }
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    ],
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    "1": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "devices": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "/dev/loop4"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            ],
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_name": "ceph_lv1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_size": "21470642176",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "name": "ceph_lv1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "tags": {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_name": "ceph",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.crush_device_class": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.encrypted": "0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_id": "1",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.vdo": "0"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            },
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "vg_name": "ceph_vg1"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        }
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    ],
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    "2": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "devices": [
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "/dev/loop5"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            ],
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_name": "ceph_lv2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_size": "21470642176",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "name": "ceph_lv2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "tags": {
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.cluster_name": "ceph",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.crush_device_class": "",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.encrypted": "0",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osd_id": "2",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:                "ceph.vdo": "0"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            },
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "type": "block",
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:            "vg_name": "ceph_vg2"
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:        }
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]:    ]
Nov 25 04:02:29 np0005534516 cool_grothendieck[392999]: }
Nov 25 04:02:29 np0005534516 systemd[1]: libpod-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope: Deactivated successfully.
Nov 25 04:02:29 np0005534516 podman[392983]: 2025-11-25 09:02:29.790052013 +0000 UTC m=+1.602239405 container died b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 04:02:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 04:02:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153-merged.mount: Deactivated successfully.
Nov 25 04:02:31 np0005534516 podman[392983]: 2025-11-25 09:02:31.173937733 +0000 UTC m=+2.986125115 container remove b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:02:31 np0005534516 systemd[1]: libpod-conmon-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope: Deactivated successfully.
Nov 25 04:02:31 np0005534516 nova_compute[253538]: 2025-11-25 09:02:31.598 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061336.5972981, 830678ef-9f48-4175-aa6d-666c24a11689 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:31 np0005534516 nova_compute[253538]: 2025-11-25 09:02:31.599 253542 INFO nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:02:31 np0005534516 nova_compute[253538]: 2025-11-25 09:02:31.621 253542 DEBUG nova.compute.manager [None req-d6d076e5-3f17-49fb-9ca7-1a1fd9df23f3 - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:31 np0005534516 nova_compute[253538]: 2025-11-25 09:02:31.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:31 np0005534516 podman[393186]: 2025-11-25 09:02:31.907061972 +0000 UTC m=+0.059986951 container create 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 04:02:31 np0005534516 systemd[1]: Started libpod-conmon-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope.
Nov 25 04:02:31 np0005534516 podman[393186]: 2025-11-25 09:02:31.872047791 +0000 UTC m=+0.024972780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:32 np0005534516 podman[393186]: 2025-11-25 09:02:32.038146766 +0000 UTC m=+0.191071735 container init 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:02:32 np0005534516 podman[393186]: 2025-11-25 09:02:32.045082654 +0000 UTC m=+0.198007633 container start 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:02:32 np0005534516 epic_brahmagupta[393202]: 167 167
Nov 25 04:02:32 np0005534516 systemd[1]: libpod-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope: Deactivated successfully.
Nov 25 04:02:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 25 04:02:32 np0005534516 podman[393186]: 2025-11-25 09:02:32.113896055 +0000 UTC m=+0.266821054 container attach 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:02:32 np0005534516 podman[393186]: 2025-11-25 09:02:32.115084197 +0000 UTC m=+0.268009196 container died 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:02:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dcc30919b6d79a0bd141af1944a0aa9c48ca12d7731e17f4f6a3f0562508fcbe-merged.mount: Deactivated successfully.
Nov 25 04:02:32 np0005534516 podman[393186]: 2025-11-25 09:02:32.323574715 +0000 UTC m=+0.476499694 container remove 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:02:32 np0005534516 systemd[1]: libpod-conmon-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope: Deactivated successfully.
Nov 25 04:02:32 np0005534516 podman[393228]: 2025-11-25 09:02:32.548631862 +0000 UTC m=+0.081283370 container create 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:02:32 np0005534516 podman[393228]: 2025-11-25 09:02:32.496456345 +0000 UTC m=+0.029107833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:02:32 np0005534516 systemd[1]: Started libpod-conmon-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope.
Nov 25 04:02:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:32 np0005534516 podman[393228]: 2025-11-25 09:02:32.662527179 +0000 UTC m=+0.195178677 container init 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:02:32 np0005534516 podman[393228]: 2025-11-25 09:02:32.67472074 +0000 UTC m=+0.207372208 container start 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:02:32 np0005534516 podman[393228]: 2025-11-25 09:02:32.680044765 +0000 UTC m=+0.212696263 container attach 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]: {
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_id": 1,
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "type": "bluestore"
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    },
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_id": 2,
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "type": "bluestore"
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    },
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_id": 0,
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:        "type": "bluestore"
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]:    }
Nov 25 04:02:33 np0005534516 priceless_feistel[393244]: }
Nov 25 04:02:33 np0005534516 systemd[1]: libpod-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Deactivated successfully.
Nov 25 04:02:33 np0005534516 podman[393228]: 2025-11-25 09:02:33.671803275 +0000 UTC m=+1.204454733 container died 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 04:02:33 np0005534516 systemd[1]: libpod-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Consumed 1.001s CPU time.
Nov 25 04:02:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d-merged.mount: Deactivated successfully.
Nov 25 04:02:33 np0005534516 nova_compute[253538]: 2025-11-25 09:02:33.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:33 np0005534516 podman[393228]: 2025-11-25 09:02:33.740673968 +0000 UTC m=+1.273325436 container remove 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:02:33 np0005534516 systemd[1]: libpod-conmon-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Deactivated successfully.
Nov 25 04:02:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:02:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:02:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 32f8db10-73f3-4d3a-9b39-7fe4cd84d93c does not exist
Nov 25 04:02:33 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5922ec99-a8fc-43b5-9d82-d92a1449200b does not exist
Nov 25 04:02:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 16 op/s
Nov 25 04:02:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:34 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:02:34 np0005534516 nova_compute[253538]: 2025-11-25 09:02:34.977 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:34 np0005534516 nova_compute[253538]: 2025-11-25 09:02:34.977 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:34 np0005534516 nova_compute[253538]: 2025-11-25 09:02:34.994 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.075 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.075 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.085 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.085 253542 INFO nova.compute.claims [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.179 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925754751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.603 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.610 253542 DEBUG nova.compute.provider_tree [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.624 253542 DEBUG nova.scheduler.client.report [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.656 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.657 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.707 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.708 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.727 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.741 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.872 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.874 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.874 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating image(s)#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.904 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.924 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.946 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.950 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:35 np0005534516 nova_compute[253538]: 2025-11-25 09:02:35.987 253542 DEBUG nova.policy [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.027 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.028 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.028 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.029 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 596 B/s wr, 13 op/s
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.090 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.093 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 28376454-90b2-431d-9052-48b369973c8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:36 np0005534516 nova_compute[253538]: 2025-11-25 09:02:36.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.466 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully created port: 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.592 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 28376454-90b2-431d-9052-48b369973c8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.641 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 28376454-90b2-431d-9052-48b369973c8e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:02:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.846 253542 DEBUG nova.objects.instance [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.858 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.859 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Ensure instance console log exists: /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.859 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.860 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:37 np0005534516 nova_compute[253538]: 2025-11-25 09:02:37.860 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 3 op/s
Nov 25 04:02:38 np0005534516 nova_compute[253538]: 2025-11-25 09:02:38.181 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully created port: fd44d480-0242-4c7a-b02e-f58852c99ca0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:02:38 np0005534516 nova_compute[253538]: 2025-11-25 09:02:38.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.491 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully updated port: 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.571 253542 DEBUG nova.compute.manager [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.572 253542 DEBUG nova.compute.manager [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.573 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.574 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.574 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:02:39 np0005534516 nova_compute[253538]: 2025-11-25 09:02:39.811 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:02:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 119 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.584 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.595 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.720 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully updated port: fd44d480-0242-4c7a-b02e-f58852c99ca0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.902 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:40 np0005534516 nova_compute[253538]: 2025-11-25 09:02:40.902 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.070 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.086 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.086 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.677 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.687 253542 DEBUG nova.compute.manager [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.688 253542 DEBUG nova.compute.manager [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-fd44d480-0242-4c7a-b02e-f58852c99ca0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:02:41 np0005534516 nova_compute[253538]: 2025-11-25 09:02:41.688 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:02:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:43 np0005534516 nova_compute[253538]: 2025-11-25 09:02:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:43 np0005534516 nova_compute[253538]: 2025-11-25 09:02:43.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.635 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance network_info: |[{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.664 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port fd44d480-0242-4c7a-b02e-f58852c99ca0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.666 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start _get_guest_xml network_info=[{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.671 253542 WARNING nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.678 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.678 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.681 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.685 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.685 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:02:45 np0005534516 nova_compute[253538]: 2025-11-25 09:02:45.687 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:02:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:02:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577756570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.126 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.146 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.149 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:02:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:02:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224312869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.593 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.594 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.594 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.595 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.596 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.596 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.597 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.598 253542 DEBUG nova.objects.instance [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.607 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.610 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <uuid>28376454-90b2-431d-9052-48b369973c8e</uuid>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <name>instance-00000085</name>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-2133610236</nova:name>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:02:45</nova:creationTime>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:port uuid="9918858c-8b7c-4d3f-aada-d04fcb6eab03">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <nova:port uuid="fd44d480-0242-4c7a-b02e-f58852c99ca0">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb8:5e87" ipVersion="6"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="serial">28376454-90b2-431d-9052-48b369973c8e</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="uuid">28376454-90b2-431d-9052-48b369973c8e</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/28376454-90b2-431d-9052-48b369973c8e_disk">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/28376454-90b2-431d-9052-48b369973c8e_disk.config">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f4:e3:ea"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <target dev="tap9918858c-8b"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:b8:5e:87"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <target dev="tapfd44d480-02"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/console.log" append="off"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:02:46 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:02:46 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:02:46 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:02:46 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Preparing to wait for external event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Preparing to wait for external event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.614 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.614 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.616 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.616 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.619 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9918858c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9918858c-8b, col_values=(('external_ids', {'iface-id': '9918858c-8b7c-4d3f-aada-d04fcb6eab03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e3:ea', 'vm-uuid': '28376454-90b2-431d-9052-48b369973c8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 NetworkManager[48915]: <info>  [1764061366.6222] manager: (tap9918858c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.629 253542 INFO os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b')#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.632 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.635 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd44d480-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.635 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd44d480-02, col_values=(('external_ids', {'iface-id': 'fd44d480-0242-4c7a-b02e-f58852c99ca0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:5e:87', 'vm-uuid': '28376454-90b2-431d-9052-48b369973c8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 NetworkManager[48915]: <info>  [1764061366.6379] manager: (tapfd44d480-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.644 253542 INFO os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02')#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.806 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.807 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.813 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:f4:e3:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:b8:5e:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.815 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Using config drive#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.837 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.842 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.925 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.926 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.932 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:02:46 np0005534516 nova_compute[253538]: 2025-11-25 09:02:46.933 253542 INFO nova.compute.claims [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.026 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.058 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.058 253542 DEBUG nova.compute.provider_tree [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.074 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.100 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.149 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.385 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating config drive at /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.390 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqvzwcwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.533 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqvzwcwn" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3400637335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.567 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.570 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config 28376454-90b2-431d-9052-48b369973c8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.597 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.603 253542 DEBUG nova.compute.provider_tree [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.615 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port fd44d480-0242-4c7a-b02e-f58852c99ca0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.615 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.618 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.641 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.646 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.647 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:02:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.695 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.696 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.710 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.725 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.812 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.813 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.814 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating image(s)
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.840 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.867 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.895 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.900 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.935 253542 DEBUG nova.policy [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.970 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.971 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.972 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.972 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:02:47 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.996 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:02:48 np0005534516 nova_compute[253538]: 2025-11-25 09:02:47.999 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:02:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:02:48 np0005534516 nova_compute[253538]: 2025-11-25 09:02:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:02:48 np0005534516 nova_compute[253538]: 2025-11-25 09:02:48.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 04:02:48 np0005534516 nova_compute[253538]: 2025-11-25 09:02:48.784 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Successfully created port: a93aab06-4a98-453a-87c3-01b817ee7602 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 04:02:48 np0005534516 nova_compute[253538]: 2025-11-25 09:02:48.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.234 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config 28376454-90b2-431d-9052-48b369973c8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.235 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deleting local config drive /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config because it was imported into RBD.
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.2797] manager: (tap9918858c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/560)
Nov 25 04:02:49 np0005534516 kernel: tap9918858c-8b: entered promiscuous mode
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01362|binding|INFO|Claiming lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 for this chassis.
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01363|binding|INFO|9918858c-8b7c-4d3f-aada-d04fcb6eab03: Claiming fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.2941] manager: (tapfd44d480-02): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Nov 25 04:02:49 np0005534516 kernel: tapfd44d480-02: entered promiscuous mode
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01364|if_status|INFO|Not updating pb chassis for fd44d480-0242-4c7a-b02e-f58852c99ca0 now as sb is readonly
Nov 25 04:02:49 np0005534516 systemd-udevd[393784]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:49 np0005534516 systemd-udevd[393785]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.312 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:ea 10.100.0.12'], port_security=['fa:16:3e:f4:e3:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9918858c-8b7c-4d3f-aada-d04fcb6eab03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.313 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 bound to our chassis
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.314 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.3257] device (tapfd44d480-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.3267] device (tapfd44d480-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.3276] device (tap9918858c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:02:49 np0005534516 systemd-machined[215790]: New machine qemu-163-instance-00000085.
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.3284] device (tap9918858c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.329 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73fcf272-2fcf-477d-87b9-ce18493953a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.330 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf691304c-d1 in ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.332 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf691304c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.332 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41fdbc07-acee-4d1e-9aa9-6ca6de3ece23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.333 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35dd6f73-5eff-47a0-8eb7-7dbb63d07f1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.345 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2b309b-5bc3-4c4d-98a6-da46821929c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 systemd[1]: Started Virtual Machine qemu-163-instance-00000085.
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.371 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3a58c7-9307-4459-ba49-d4c72aae4010]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01365|binding|INFO|Claiming lport fd44d480-0242-4c7a-b02e-f58852c99ca0 for this chassis.
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01366|binding|INFO|fd44d480-0242-4c7a-b02e-f58852c99ca0: Claiming fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01367|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 ovn-installed in OVS
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.399 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120f899f-ceb3-45bb-aa0b-47a711ba3dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01368|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 up in Southbound
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.404 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], port_security=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:5e87/64', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fd44d480-0242-4c7a-b02e-f58852c99ca0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.408 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e476e3-092e-4024-8ebd-461812c63b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01369|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 ovn-installed in OVS
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01370|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 up in Southbound
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.4094] manager: (tapf691304c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/562)
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00578ddf-12ad-46e6-a2cc-387290a50b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.451 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1f612679-ff66-477a-ab7f-0b884c20b8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.4730] device (tapf691304c-d0): carrier: link connected
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.479 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3b48d6-6df8-4307-83c1-f32d470d0e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.497 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51392a50-f54f-4695-afef-bb7dca2bbea4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393821, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.513 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a1ae20-5ec9-470a-b495-4725bdc237f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:d69f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672407, 'tstamp': 672407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393822, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.527 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbc9028-b8fe-42a7-b4d7-fe566352a001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393823, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.555 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6f2aca-08c4-4c00-9776-773f30ce4501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9b2520-3e9c-4e01-919c-bb382a2697eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:49 np0005534516 NetworkManager[48915]: <info>  [1764061369.6276] manager: (tapf691304c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:49 np0005534516 kernel: tapf691304c-d0: entered promiscuous mode
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.629 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.630 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:49Z|01371|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.632 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b647b3cb-c58c-47cc-b3fe-8114cce5eb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.640 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:02:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.641 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'env', 'PROCESS_TAG=haproxy-f691304c-d112-4c32-b3ac-0f33230178b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f691304c-d112-4c32-b3ac-0f33230178b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.659 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Successfully updated port: a93aab06-4a98-453a-87c3-01b817ee7602 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.676 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.676 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.698 253542 DEBUG nova.compute.manager [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG nova.compute.manager [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Processing event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:02:49 np0005534516 nova_compute[253538]: 2025-11-25 09:02:49.834 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:02:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 153 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 46 op/s
Nov 25 04:02:50 np0005534516 podman[393874]: 2025-11-25 09:02:50.007194987 +0000 UTC m=+0.018753370 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.559 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061370.558355, 28376454-90b2-431d-9052-48b369973c8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.559 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Started (Lifecycle Event)#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.580 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.584 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061370.5593734, 28376454-90b2-431d-9052-48b369973c8e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.584 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.603 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.606 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.621 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:02:50 np0005534516 podman[393874]: 2025-11-25 09:02:50.632481645 +0000 UTC m=+0.644040008 container create adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:02:50 np0005534516 systemd[1]: Started libpod-conmon-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope.
Nov 25 04:02:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:50 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d86f6fd563b663b27ffb3825320ba53e316c324703edef04af20e73a076c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:50 np0005534516 podman[393874]: 2025-11-25 09:02:50.760920776 +0000 UTC m=+0.772479219 container init adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.760 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:50 np0005534516 podman[393874]: 2025-11-25 09:02:50.768592725 +0000 UTC m=+0.780151128 container start adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 04:02:50 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : New worker (393939) forked
Nov 25 04:02:50 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : Loading success.
Nov 25 04:02:50 np0005534516 nova_compute[253538]: 2025-11-25 09:02:50.822 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.969 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fd44d480-0242-4c7a-b02e-f58852c99ca0 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.971 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.981 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40cf23c1-0d04-440a-b81d-4181a55fd53a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269f4fa4-a1 in ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.984 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269f4fa4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21e6612f-9201-4fc6-ac52-8e1f9dd5f463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.985 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c7c37e-6511-47c8-88d7-1812d7874c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.995 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4be70ba4-f060-4661-87fd-24b5726a8340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.007 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee2c815-c629-40bf-90fe-3c4e2835c008]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.037 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.037 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6da867ae-2fe3-4fb8-b8ba-0ebb0bbd22c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[662d9366-b4f3-4537-8fc6-9c37d11a1fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 NetworkManager[48915]: <info>  [1764061371.0480] manager: (tap269f4fa4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/564)
Nov 25 04:02:51 np0005534516 systemd-udevd[393815]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.055 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.056 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance network_info: |[{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.082 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[faa619a6-516f-4acd-931d-be1b951edb76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.085 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6d11f85e-c8a5-4ba4-8012-a0daac377869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 NetworkManager[48915]: <info>  [1764061371.1066] device (tap269f4fa4-a0): carrier: link connected
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.110 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd683c3f-a4d1-4f2d-bd97-12e49fccf080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.126 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[632aba05-2867-4c17-9559-2a0fe7aab024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393994, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.142 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36a86f6d-2a6c-494f-9e5e-2e32c0c5c096]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:a884'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672570, 'tstamp': 672570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393995, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.157 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[22f21bf4-c3b1-4bf7-a0af-c512c2dc4e40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393996, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.183 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00dac0-1519-479a-a87b-10fefe0efc5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb845718-784c-4ffc-bda0-772bfcd7952a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.210 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:51 np0005534516 NetworkManager[48915]: <info>  [1764061371.2134] manager: (tap269f4fa4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:51 np0005534516 kernel: tap269f4fa4-a0: entered promiscuous mode
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.217 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:51 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:51Z|01372|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.221 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcb4158-e623-459f-a35f-ffe14d765450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.222 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:02:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.223 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'env', 'PROCESS_TAG=haproxy-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269f4fa4-a7fb-4f9a-b49d-3b1968826304.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.638 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:51 np0005534516 podman[394027]: 2025-11-25 09:02:51.596346167 +0000 UTC m=+0.023270364 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.827 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.827 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.828 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.828 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.829 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.875 253542 DEBUG nova.objects.instance [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:51 np0005534516 podman[394040]: 2025-11-25 09:02:51.881722064 +0000 UTC m=+0.126051608 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.890 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Ensure instance console log exists: /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.892 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.893 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start _get_guest_xml network_info=[{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.898 253542 WARNING nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.904 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.904 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.911 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.911 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:02:51 np0005534516 nova_compute[253538]: 2025-11-25 09:02:51.916 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:51 np0005534516 podman[394027]: 2025-11-25 09:02:51.970058686 +0000 UTC m=+0.396982863 container create 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:02:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 165 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 04:02:52 np0005534516 systemd[1]: Started libpod-conmon-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope.
Nov 25 04:02:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:52 np0005534516 podman[394078]: 2025-11-25 09:02:52.152921206 +0000 UTC m=+0.243775078 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:02:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a675655322cb97a61297c5daeb9688e6dd7732cb7cb875eab3105e1fff65389/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:02:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943920349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:02:52 np0005534516 podman[394027]: 2025-11-25 09:02:52.372882126 +0000 UTC m=+0.799806293 container init 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 04:02:52 np0005534516 podman[394027]: 2025-11-25 09:02:52.378564191 +0000 UTC m=+0.805488358 container start 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.380 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:52 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : New worker (394142) forked
Nov 25 04:02:52 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : Loading success.
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.405 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.411 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:02:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831216243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.838 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.840 253542 DEBUG nova.virt.libvirt.vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:47Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.840 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.841 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.842 253542 DEBUG nova.objects.instance [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.855 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <uuid>2c4a1d63-7674-4276-8da9-b9d4f4fea307</uuid>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <name>instance-00000086</name>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118</nova:name>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:02:51</nova:creationTime>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <nova:port uuid="a93aab06-4a98-453a-87c3-01b817ee7602">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="serial">2c4a1d63-7674-4276-8da9-b9d4f4fea307</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="uuid">2c4a1d63-7674-4276-8da9-b9d4f4fea307</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:2f:fb:42"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <target dev="tapa93aab06-4a"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/console.log" append="off"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:02:52 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:02:52 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:02:52 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:02:52 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.856 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Preparing to wait for external event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.857 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.857 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.858 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.859 253542 DEBUG nova.virt.libvirt.vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:47Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.859 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.860 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.860 253542 DEBUG os_vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa93aab06-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa93aab06-4a, col_values=(('external_ids', {'iface-id': 'a93aab06-4a98-453a-87c3-01b817ee7602', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:fb:42', 'vm-uuid': '2c4a1d63-7674-4276-8da9-b9d4f4fea307'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:52 np0005534516 NetworkManager[48915]: <info>  [1764061372.8689] manager: (tapa93aab06-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.877 253542 INFO os_vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a')#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.957 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.958 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.958 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:2f:fb:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:02:52 np0005534516 nova_compute[253538]: 2025-11-25 09:02:52.959 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Using config drive#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.016 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.262 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.263 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.283 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No event matching network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 in dict_keys([('network-vif-plugged', 'fd44d480-0242-4c7a-b02e-f58852c99ca0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 WARNING nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Processing event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 WARNING nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.308 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061373.3077528, 28376454-90b2-431d-9052-48b369973c8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.309 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.312 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.316 253542 INFO nova.virt.libvirt.driver [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance spawned successfully.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.316 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.330 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.339 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.342 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.343 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.343 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.344 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.344 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.345 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:02:53
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.371 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.387 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating config drive at /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.392 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgg4p4mt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.440 253542 INFO nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 17.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.441 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:02:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.511 253542 INFO nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 18.47 seconds to build instance.#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.527 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.550 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgg4p4mt" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.651 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.657 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.699 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:53 np0005534516 nova_compute[253538]: 2025-11-25 09:02:53.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:02:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 176 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 24 op/s
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.237 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.237 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deleting local config drive /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config because it was imported into RBD.#033[00m
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.2805] manager: (tapa93aab06-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/567)
Nov 25 04:02:54 np0005534516 kernel: tapa93aab06-4a: entered promiscuous mode
Nov 25 04:02:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:54Z|01373|binding|INFO|Claiming lport a93aab06-4a98-453a-87c3-01b817ee7602 for this chassis.
Nov 25 04:02:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:54Z|01374|binding|INFO|a93aab06-4a98-453a-87c3-01b817ee7602: Claiming fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.331 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:fb:42 10.100.0.13'], port_security=['fa:16:3e:2f:fb:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c4a1d63-7674-4276-8da9-b9d4f4fea307', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '046f46ed-7d5f-45ad-8313-fa0fe77b097a 0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a93aab06-4a98-453a-87c3-01b817ee7602) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.333 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a93aab06-4a98-453a-87c3-01b817ee7602 in datapath 72472fc5-3661-404c-a0d2-df155795bd2b bound to our chassis#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.334 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b57be80f-bf72-48d4-893a-6c795c372a7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.345 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72472fc5-31 in ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:02:54 np0005534516 systemd-udevd[394251]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.347 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72472fc5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.347 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f52b3fc6-2551-4783-a6fd-03b6fed3f615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.348 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3a2533-9c03-41a3-aed3-e6c04ba16da9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 systemd-machined[215790]: New machine qemu-164-instance-00000086.
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.358 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9f895fde-e745-46fa-ab3d-daffca5df5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.3659] device (tapa93aab06-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.3669] device (tapa93aab06-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:02:54 np0005534516 systemd[1]: Started Virtual Machine qemu-164-instance-00000086.
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7aacec-fe85-4830-a423-057ead4db048]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.388 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:54Z|01375|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 ovn-installed in OVS
Nov 25 04:02:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:54Z|01376|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 up in Southbound
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.416 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[962f5d31-9227-4971-ac28-3e5fc00a27f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.4240] manager: (tap72472fc5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/568)
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.423 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb76805-fd3f-479d-bf2c-aad48faf6975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 systemd-udevd[394255]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.458 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b66ed0b-57d2-4f49-881d-f046ca3fa405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.461 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5eebc0-0dcb-4ff2-9702-b20083d6f446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.4858] device (tap72472fc5-30): carrier: link connected
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.490 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d345b6a-9b91-4e05-8591-1ed6cd32aef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bbac8c80-db24-46d8-a92c-809f181f1caf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394285, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00fa75d9-0139-44af-9cb9-2eb3c6a7384e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:5c07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672908, 'tstamp': 672908}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394286, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a924a64a-7b47-4d0b-98b3-98706b8e5790]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394287, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.571 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efa207cf-956b-4fdf-883b-dc7eab73040e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c41d503-abba-4c45-a715-2d7b6daa7e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.630 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.631 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.631 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 NetworkManager[48915]: <info>  [1764061374.6337] manager: (tap72472fc5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Nov 25 04:02:54 np0005534516 kernel: tap72472fc5-30: entered promiscuous mode
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.637 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:02:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:54Z|01377|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.653 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5f71ff37-188a-4fa7-a21c-d2228e0eb38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.655 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:02:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.655 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'env', 'PROCESS_TAG=haproxy-72472fc5-3661-404c-a0d2-df155795bd2b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72472fc5-3661-404c-a0d2-df155795bd2b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.746 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061374.7456698, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.746 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Started (Lifecycle Event)#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.775 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.779 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061374.7485554, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.780 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.805 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.807 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:02:54 np0005534516 nova_compute[253538]: 2025-11-25 09:02:54.835 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:02:55 np0005534516 podman[394361]: 2025-11-25 09:02:55.02901297 +0000 UTC m=+0.037870011 container create 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 04:02:55 np0005534516 systemd[1]: Started libpod-conmon-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope.
Nov 25 04:02:55 np0005534516 podman[394361]: 2025-11-25 09:02:55.009273384 +0000 UTC m=+0.018130445 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:02:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:02:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c472ba9f61967f5c2c1bbfa218a3c4c1a73bb2a29d7b40245ca25e38f08a9e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:02:55 np0005534516 podman[394361]: 2025-11-25 09:02:55.126728907 +0000 UTC m=+0.135585968 container init 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:02:55 np0005534516 podman[394361]: 2025-11-25 09:02:55.132335869 +0000 UTC m=+0.141192910 container start 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:02:55 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : New worker (394383) forked
Nov 25 04:02:55 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : Loading success.
Nov 25 04:02:55 np0005534516 nova_compute[253538]: 2025-11-25 09:02:55.550 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:02:56 np0005534516 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265326903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.042 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.115 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.115 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.278 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.946624755859375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG nova.compute.manager [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.290 253542 DEBUG nova.compute.manager [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Processing event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.290 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.298 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061377.2986612, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.299 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.301 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.305 253542 INFO nova.virt.libvirt.driver [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance spawned successfully.#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.305 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.335 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.338 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.346 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.348 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.348 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.370 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.397 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 28376454-90b2-431d-9052-48b369973c8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.407 253542 INFO nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 9.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.407 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.465 253542 INFO nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 10.57 seconds to build instance.#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.470 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.511 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:02:57 np0005534516 nova_compute[253538]: 2025-11-25 09:02:57.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:02:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706206898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.012 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.018 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.044 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:02:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.070 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.071 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:58 np0005534516 nova_compute[253538]: 2025-11-25 09:02:58.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.453 253542 DEBUG nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.454 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.456 253542 WARNING nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received unexpected event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:02:59 np0005534516 podman[394437]: 2025-11-25 09:02:59.828941411 +0000 UTC m=+0.082412311 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:59 np0005534516 NetworkManager[48915]: <info>  [1764061379.9046] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01378|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01379|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01380|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 NetworkManager[48915]: <info>  [1764061379.9058] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01381|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01382|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:02:59Z|01383|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:02:59 np0005534516 nova_compute[253538]: 2025-11-25 09:02:59.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 25 04:03:00 np0005534516 nova_compute[253538]: 2025-11-25 09:03:00.318 253542 DEBUG nova.compute.manager [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:00 np0005534516 nova_compute[253538]: 2025-11-25 09:03:00.319 253542 DEBUG nova.compute.manager [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:00 np0005534516 nova_compute[253538]: 2025-11-25 09:03:00.319 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:00 np0005534516 nova_compute[253538]: 2025-11-25 09:03:00.320 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:00 np0005534516 nova_compute[253538]: 2025-11-25 09:03:00.320 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Nov 25 04:03:02 np0005534516 nova_compute[253538]: 2025-11-25 09:03:02.216 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:02 np0005534516 nova_compute[253538]: 2025-11-25 09:03:02.217 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:02 np0005534516 nova_compute[253538]: 2025-11-25 09:03:02.234 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:02 np0005534516 nova_compute[253538]: 2025-11-25 09:03:02.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:03 np0005534516 nova_compute[253538]: 2025-11-25 09:03:03.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 810 KiB/s wr, 153 op/s
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:03:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:03:04 np0005534516 nova_compute[253538]: 2025-11-25 09:03:04.616 253542 DEBUG nova.compute.manager [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:04 np0005534516 nova_compute[253538]: 2025-11-25 09:03:04.618 253542 DEBUG nova.compute.manager [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:04 np0005534516 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:04 np0005534516 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:04 np0005534516 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 55 KiB/s wr, 152 op/s
Nov 25 04:03:06 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:06Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 04:03:06 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:06Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 04:03:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:07 np0005534516 nova_compute[253538]: 2025-11-25 09:03:07.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 199 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 744 KiB/s wr, 123 op/s
Nov 25 04:03:08 np0005534516 nova_compute[253538]: 2025-11-25 09:03:08.321 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:08 np0005534516 nova_compute[253538]: 2025-11-25 09:03:08.321 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:08 np0005534516 nova_compute[253538]: 2025-11-25 09:03:08.337 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:08 np0005534516 nova_compute[253538]: 2025-11-25 09:03:08.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 222 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Nov 25 04:03:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:10Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 04:03:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:10Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 04:03:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 234 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Nov 25 04:03:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:12 np0005534516 nova_compute[253538]: 2025-11-25 09:03:12.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:13 np0005534516 nova_compute[253538]: 2025-11-25 09:03:13.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 243 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Nov 25 04:03:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.752393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061396752463, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3346609, "memory_usage": 3404680, "flush_reason": "Manual Compaction"}
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061396897639, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3279708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49760, "largest_seqno": 51819, "table_properties": {"data_size": 3270425, "index_size": 5841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19034, "raw_average_key_size": 20, "raw_value_size": 3251904, "raw_average_value_size": 3444, "num_data_blocks": 259, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061178, "oldest_key_time": 1764061178, "file_creation_time": 1764061396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 145292 microseconds, and 13556 cpu microseconds.
Nov 25 04:03:16 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.897690) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3279708 bytes OK
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.897715) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056571) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056617) EVENT_LOG_v1 {"time_micros": 1764061397056609, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056640) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3337962, prev total WAL file size 3337962, number of live WAL files 2.
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.057688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3202KB)], [116(8422KB)]
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397057717, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11904328, "oldest_snapshot_seqno": -1}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7404 keys, 10210900 bytes, temperature: kUnknown
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397239104, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10210900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10161992, "index_size": 29276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191974, "raw_average_key_size": 25, "raw_value_size": 10030229, "raw_average_value_size": 1354, "num_data_blocks": 1149, "num_entries": 7404, "num_filter_entries": 7404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.239401) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10210900 bytes
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.409944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.6 rd, 56.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7918, records dropped: 514 output_compression: NoCompression
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.409999) EVENT_LOG_v1 {"time_micros": 1764061397409979, "job": 70, "event": "compaction_finished", "compaction_time_micros": 181492, "compaction_time_cpu_micros": 25084, "output_level": 6, "num_output_files": 1, "total_output_size": 10210900, "num_input_records": 7918, "num_output_records": 7404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397411617, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397414882, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.057606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:17 np0005534516 nova_compute[253538]: 2025-11-25 09:03:17.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 04:03:18 np0005534516 nova_compute[253538]: 2025-11-25 09:03:18.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.743 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.744 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.762 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.853 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.854 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.864 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:03:21 np0005534516 nova_compute[253538]: 2025-11-25 09:03:21.865 253542 INFO nova.compute.claims [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.030 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 04:03:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:03:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571430072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.540 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.548 253542 DEBUG nova.compute.provider_tree [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.564 253542 DEBUG nova.scheduler.client.report [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:03:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.787 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.787 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:03:22 np0005534516 podman[394489]: 2025-11-25 09:03:22.809870998 +0000 UTC m=+0.049846547 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 04:03:22 np0005534516 podman[394488]: 2025-11-25 09:03:22.835534475 +0000 UTC m=+0.084380754 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:03:22 np0005534516 nova_compute[253538]: 2025-11-25 09:03:22.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.254 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.254 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.272 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.285 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.368 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.370 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.370 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating image(s)#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.396 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.671 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.778 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.781 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.857 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.858 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.859 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.859 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.881 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.884 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:23 np0005534516 nova_compute[253538]: 2025-11-25 09:03:23.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:24 np0005534516 nova_compute[253538]: 2025-11-25 09:03:24.022 253542 DEBUG nova.policy [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:03:24 np0005534516 nova_compute[253538]: 2025-11-25 09:03:24.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:24.056 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:03:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:24.058 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:03:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 656 KiB/s wr, 15 op/s
Nov 25 04:03:24 np0005534516 nova_compute[253538]: 2025-11-25 09:03:24.796 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.911s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:24 np0005534516 nova_compute[253538]: 2025-11-25 09:03:24.858 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.035 253542 DEBUG nova.objects.instance [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Ensure instance console log exists: /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.059 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:25 np0005534516 nova_compute[253538]: 2025-11-25 09:03:25.059 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:25 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:25.060 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 274 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 04:03:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:27 np0005534516 nova_compute[253538]: 2025-11-25 09:03:27.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:28 np0005534516 nova_compute[253538]: 2025-11-25 09:03:28.063 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully created port: 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:03:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 289 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Nov 25 04:03:28 np0005534516 nova_compute[253538]: 2025-11-25 09:03:28.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:03:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:03:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:03:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.214 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully created port: 25c8c441-cd5e-4cd3-9151-e8137db08e65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.855 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.937 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.939 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.955 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:03:29 np0005534516 nova_compute[253538]: 2025-11-25 09:03:29.955 253542 INFO nova.compute.claims [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.075 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully updated port: 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:03:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.099 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.182 253542 DEBUG nova.compute.manager [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG nova.compute.manager [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.400 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:03:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:03:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663197943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.619 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.627 253542 DEBUG nova.compute.provider_tree [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.643 253542 DEBUG nova.scheduler.client.report [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.700 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.701 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.811 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.811 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.831 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:03:30 np0005534516 podman[394714]: 2025-11-25 09:03:30.85325476 +0000 UTC m=+0.103946127 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.854 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.960 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.962 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.963 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating image(s)#033[00m
Nov 25 04:03:30 np0005534516 nova_compute[253538]: 2025-11-25 09:03:30.981 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.002 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.024 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.027 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.102 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.103 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.104 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.104 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.125 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.128 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 60fab831-4ae4-4e18-a4e4-5466abbece52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.263 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.290 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.331 253542 DEBUG nova.policy [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.462 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 60fab831-4ae4-4e18-a4e4-5466abbece52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.498 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully updated port: 25c8c441-cd5e-4cd3-9151-e8137db08e65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.549 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.549 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.550 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.557 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.668 253542 DEBUG nova.objects.instance [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.678 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.679 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Ensure instance console log exists: /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.679 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.680 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.680 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:31 np0005534516 nova_compute[253538]: 2025-11-25 09:03:31.952 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:03:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 303 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 38 op/s
Nov 25 04:03:32 np0005534516 nova_compute[253538]: 2025-11-25 09:03:32.260 253542 DEBUG nova.compute.manager [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:32 np0005534516 nova_compute[253538]: 2025-11-25 09:03:32.260 253542 DEBUG nova.compute.manager [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-25c8c441-cd5e-4cd3-9151-e8137db08e65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:32 np0005534516 nova_compute[253538]: 2025-11-25 09:03:32.261 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:32 np0005534516 nova_compute[253538]: 2025-11-25 09:03:32.329 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Successfully created port: d2008aa0-bac3-4d83-88d2-34376e911b2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:03:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:32 np0005534516 nova_compute[253538]: 2025-11-25 09:03:32.916 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.649 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Successfully updated port: d2008aa0-bac3-4d83-88d2-34376e911b2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.663 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.663 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.664 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.756 253542 DEBUG nova.compute.manager [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.757 253542 DEBUG nova.compute.manager [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing instance network info cache due to event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.758 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:33 np0005534516 nova_compute[253538]: 2025-11-25 09:03:33.964 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:03:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 40 op/s
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.157 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance network_info: |[{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.175 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 25c8c441-cd5e-4cd3-9151-e8137db08e65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.178 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start _get_guest_xml network_info=[{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.183 253542 WARNING nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.192 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.192 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.197 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.197 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.200 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.203 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778574145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.664 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.689 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:34 np0005534516 nova_compute[253538]: 2025-11-25 09:03:34.692 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fe72d3be-15b6-49ab-a422-849cac669e17 does not exist
Nov 25 04:03:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4a80d837-ad7a-480b-aeaf-6c2c204c4c1b does not exist
Nov 25 04:03:34 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d4c12f1b-1e7f-42cc-8a01-126e19d1bbdb does not exist
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:03:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734369686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.149 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.151 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.152 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.153 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.154 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.154 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.155 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.156 253542 DEBUG nova.objects.instance [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.168 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <uuid>1ee319a5-b613-4b27-a1e6-64b0129bf269</uuid>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <name>instance-00000087</name>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-872919100</nova:name>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:03:34</nova:creationTime>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:port uuid="2cd88dce-60d9-4da6-a5f6-ba6622fd8812">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <nova:port uuid="25c8c441-cd5e-4cd3-9151-e8137db08e65">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fefe:c4a5" ipVersion="6"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="serial">1ee319a5-b613-4b27-a1e6-64b0129bf269</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="uuid">1ee319a5-b613-4b27-a1e6-64b0129bf269</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1ee319a5-b613-4b27-a1e6-64b0129bf269_disk">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:1e:e8:e0"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <target dev="tap2cd88dce-60"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:fe:c4:a5"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <target dev="tap25c8c441-cd"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/console.log" append="off"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:03:35 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:03:35 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:03:35 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:03:35 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.170 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Preparing to wait for external event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.170 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Preparing to wait for external event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.172 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.172 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.173 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.173 253542 DEBUG os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.175 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.175 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cd88dce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cd88dce-60, col_values=(('external_ids', {'iface-id': '2cd88dce-60d9-4da6-a5f6-ba6622fd8812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:e8:e0', 'vm-uuid': '1ee319a5-b613-4b27-a1e6-64b0129bf269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 NetworkManager[48915]: <info>  [1764061415.1817] manager: (tap2cd88dce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.187 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.187 253542 INFO os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60')#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.188 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.188 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.189 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.189 253542 DEBUG os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25c8c441-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25c8c441-cd, col_values=(('external_ids', {'iface-id': '25c8c441-cd5e-4cd3-9151-e8137db08e65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c4:a5', 'vm-uuid': '1ee319a5-b613-4b27-a1e6-64b0129bf269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 NetworkManager[48915]: <info>  [1764061415.1952] manager: (tap25c8c441-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.204 253542 INFO os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd')#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:1e:e8:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.260 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:fe:c4:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.260 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Using config drive#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.292 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.323 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.345 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.346 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance network_info: |[{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.346 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.347 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.350 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start _get_guest_xml network_info=[{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.355 253542 WARNING nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.365 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.366 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.370 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.371 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.371 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.377 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.520474514 +0000 UTC m=+0.100998967 container create b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.440975723 +0000 UTC m=+0.021500196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:35 np0005534516 systemd[1]: Started libpod-conmon-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope.
Nov 25 04:03:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.672 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating config drive at /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.680 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvqaj0jt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.759875191 +0000 UTC m=+0.340399704 container init b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.769027221 +0000 UTC m=+0.349551674 container start b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:03:35 np0005534516 adoring_banach[395299]: 167 167
Nov 25 04:03:35 np0005534516 systemd[1]: libpod-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope: Deactivated successfully.
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.821335132 +0000 UTC m=+0.401859635 container attach b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:03:35 np0005534516 podman[395263]: 2025-11-25 09:03:35.821707483 +0000 UTC m=+0.402231956 container died b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.822 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvqaj0jt" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:03:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880590095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.849 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.852 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-290eec8d57d83157bf2bf33c009f078ff32b600cab5be2c95e5ca0acd6bf3d04-merged.mount: Deactivated successfully.
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.886 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.906 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:35 np0005534516 nova_compute[253538]: 2025-11-25 09:03:35.910 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 04:03:36 np0005534516 podman[395263]: 2025-11-25 09:03:36.093583874 +0000 UTC m=+0.674108327 container remove b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 25 04:03:36 np0005534516 systemd[1]: libpod-conmon-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope: Deactivated successfully.
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.282 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 25c8c441-cd5e-4cd3-9151-e8137db08e65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.285 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.307 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:36 np0005534516 podman[395403]: 2025-11-25 09:03:36.274601924 +0000 UTC m=+0.037186702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:03:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606996446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:03:36 np0005534516 podman[395403]: 2025-11-25 09:03:36.461536066 +0000 UTC m=+0.224120824 container create d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.496 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.498 253542 DEBUG nova.virt.libvirt.vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:30Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.498 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.499 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.500 253542 DEBUG nova.objects.instance [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.517 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <uuid>60fab831-4ae4-4e18-a4e4-5466abbece52</uuid>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <name>instance-00000088</name>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521</nova:name>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:03:35</nova:creationTime>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <nova:port uuid="d2008aa0-bac3-4d83-88d2-34376e911b2c">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="serial">60fab831-4ae4-4e18-a4e4-5466abbece52</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="uuid">60fab831-4ae4-4e18-a4e4-5466abbece52</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/60fab831-4ae4-4e18-a4e4-5466abbece52_disk">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:72:7d:84"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <target dev="tapd2008aa0-ba"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/console.log" append="off"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:03:36 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:03:36 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:03:36 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:03:36 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.519 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Preparing to wait for external event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.520 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.520 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.521 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.522 253542 DEBUG nova.virt.libvirt.vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:30Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:03:36 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.522 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.523 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.524 253542 DEBUG os_vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.525 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.526 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.531 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2008aa0-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.531 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2008aa0-ba, col_values=(('external_ids', {'iface-id': 'd2008aa0-bac3-4d83-88d2-34376e911b2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:7d:84', 'vm-uuid': '60fab831-4ae4-4e18-a4e4-5466abbece52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.5385] manager: (tapd2008aa0-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.546 253542 INFO os_vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba')
Nov 25 04:03:36 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:03:36 np0005534516 systemd[1]: Started libpod-conmon-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope.
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.573 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.574 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deleting local config drive /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config because it was imported into RBD.
Nov 25 04:03:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:36 np0005534516 podman[395403]: 2025-11-25 09:03:36.604861272 +0000 UTC m=+0.367446060 container init d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.615 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:72:7d:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Using config drive
Nov 25 04:03:36 np0005534516 podman[395403]: 2025-11-25 09:03:36.617860706 +0000 UTC m=+0.380445464 container start d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.645 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.6472] manager: (tap2cd88dce-60): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Nov 25 04:03:36 np0005534516 kernel: tap2cd88dce-60: entered promiscuous mode
Nov 25 04:03:36 np0005534516 podman[395403]: 2025-11-25 09:03:36.675938274 +0000 UTC m=+0.438523052 container attach d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01384|binding|INFO|Claiming lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for this chassis.
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01385|binding|INFO|2cd88dce-60d9-4da6-a5f6-ba6622fd8812: Claiming fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 04:03:36 np0005534516 systemd-udevd[395462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.6821] manager: (tap25c8c441-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Nov 25 04:03:36 np0005534516 systemd-udevd[395465]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.684 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e8:e0 10.100.0.14'], port_security=['fa:16:3e:1e:e8:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cd88dce-60d9-4da6-a5f6-ba6622fd8812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 bound to our chassis
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.687 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.692 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.6954] device (tap2cd88dce-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.6961] device (tap2cd88dce-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.7027] device (tap25c8c441-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:03:36 np0005534516 kernel: tap25c8c441-cd: entered promiscuous mode
Nov 25 04:03:36 np0005534516 NetworkManager[48915]: <info>  [1764061416.7038] device (tap25c8c441-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01386|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 ovn-installed in OVS
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01387|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 up in Southbound
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01388|if_status|INFO|Not updating pb chassis for 25c8c441-cd5e-4cd3-9151-e8137db08e65 now as sb is readonly
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c952ebb-93f1-434f-a290-7eca6efdc5d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01389|binding|INFO|Claiming lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 for this chassis.
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01390|binding|INFO|25c8c441-cd5e-4cd3-9151-e8137db08e65: Claiming fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.737 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], port_security=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefe:c4a5/64', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=25c8c441-cd5e-4cd3-9151-e8137db08e65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01391|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 ovn-installed in OVS
Nov 25 04:03:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:36Z|01392|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 up in Southbound
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 systemd-machined[215790]: New machine qemu-165-instance-00000087.
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.770 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f85db8-a2a5-4d5a-a6a2-3b4978cdf947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.775 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[acde9e8e-33ec-43e4-8828-550e34ad9dd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 systemd[1]: Started Virtual Machine qemu-165-instance-00000087.
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.808 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b03e3464-9075-4f14-be5f-8fb1cf2429ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36d9fb8a-f1a2-4592-ba07-705b3b3f88c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395480, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.843 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04cb5edb-d211-43cb-a215-44ea5ac89610]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672419, 'tstamp': 672419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395485, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672421, 'tstamp': 672421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395485, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.844 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.855 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 25c8c441-cd5e-4cd3-9151-e8137db08e65 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.857 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21612972-5311-4696-908c-f881e40ab195]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.904 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9bd8f9-86b9-4b40-8d86-f582cc45415a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.906 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d13f5630-8af8-4bb7-86f3-fc4497de7aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.944 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[887050a0-2afb-4ab6-93af-e37a350c4194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.965 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d91be723-7f3b-45ea-a9ea-6fa905a1fbba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395493, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.986 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38e8dff1-0840-4b91-89d9-90f962c86300]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap269f4fa4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672580, 'tstamp': 672580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395494, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.988 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:36 np0005534516 nova_compute[253538]: 2025-11-25 09:03:36.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.998 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.998 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.371 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061417.3714201, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.372 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Started (Lifecycle Event)#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.388 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.393 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061417.3715806, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG nova.compute.manager [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.413 253542 DEBUG nova.compute.manager [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Processing event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.418 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.498 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating config drive at /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.503 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7qkw4gl4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.536 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updated VIF entry in instance network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.537 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.550 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.641 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7qkw4gl4" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.662 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:03:37 np0005534516 nova_compute[253538]: 2025-11-25 09:03:37.665 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:37 np0005534516 recursing_jepsen[395427]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:03:37 np0005534516 recursing_jepsen[395427]: --> relative data size: 1.0
Nov 25 04:03:37 np0005534516 recursing_jepsen[395427]: --> All data devices are unavailable
Nov 25 04:03:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:37 np0005534516 systemd[1]: libpod-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Deactivated successfully.
Nov 25 04:03:37 np0005534516 systemd[1]: libpod-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Consumed 1.033s CPU time.
Nov 25 04:03:37 np0005534516 podman[395403]: 2025-11-25 09:03:37.736969147 +0000 UTC m=+1.499553905 container died d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:03:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced-merged.mount: Deactivated successfully.
Nov 25 04:03:37 np0005534516 podman[395403]: 2025-11-25 09:03:37.895793875 +0000 UTC m=+1.658378633 container remove d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 04:03:37 np0005534516 systemd[1]: libpod-conmon-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Deactivated successfully.
Nov 25 04:03:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.405 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.408 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deleting local config drive /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config because it was imported into RBD.#033[00m
Nov 25 04:03:38 np0005534516 NetworkManager[48915]: <info>  [1764061418.4710] manager: (tapd2008aa0-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Nov 25 04:03:38 np0005534516 kernel: tapd2008aa0-ba: entered promiscuous mode
Nov 25 04:03:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:38Z|01393|binding|INFO|Claiming lport d2008aa0-bac3-4d83-88d2-34376e911b2c for this chassis.
Nov 25 04:03:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:38Z|01394|binding|INFO|d2008aa0-bac3-4d83-88d2-34376e911b2c: Claiming fa:16:3e:72:7d:84 10.100.0.11
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.486 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:7d:84 10.100.0.11'], port_security=['fa:16:3e:72:7d:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '60fab831-4ae4-4e18-a4e4-5466abbece52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d2008aa0-bac3-4d83-88d2-34376e911b2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.487 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d2008aa0-bac3-4d83-88d2-34376e911b2c in datapath 72472fc5-3661-404c-a0d2-df155795bd2b bound to our chassis#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.488 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b#033[00m
Nov 25 04:03:38 np0005534516 NetworkManager[48915]: <info>  [1764061418.4961] device (tapd2008aa0-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:03:38 np0005534516 NetworkManager[48915]: <info>  [1764061418.4970] device (tapd2008aa0-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:03:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:38Z|01395|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c ovn-installed in OVS
Nov 25 04:03:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:03:38Z|01396|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c up in Southbound
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.507 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee576ae-aedf-4d06-a3e1-21ce1ac56451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 systemd-machined[215790]: New machine qemu-166-instance-00000088.
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.546 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce81f7e-3d45-4b2a-8033-7fe4cc158996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.550 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[19753bfd-1536-4d41-bf9e-12ae0540d18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 systemd[1]: Started Virtual Machine qemu-166-instance-00000088.
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.583 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac8e156-a2f1-473b-b069-d0c6e47137af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.602 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33edc571-215c-421f-8390-3dc874b082d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395781, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc44e31-5fda-4b8f-928c-19f13f11a85b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672919, 'tstamp': 672919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395794, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672922, 'tstamp': 672922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395794, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.623 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:03:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:03:38 np0005534516 podman[395778]: 2025-11-25 09:03:38.617913595 +0000 UTC m=+0.023172351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:38 np0005534516 podman[395778]: 2025-11-25 09:03:38.797511487 +0000 UTC m=+0.202770233 container create dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:03:38 np0005534516 systemd[1]: Started libpod-conmon-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope.
Nov 25 04:03:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:38 np0005534516 nova_compute[253538]: 2025-11-25 09:03:38.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:38 np0005534516 podman[395778]: 2025-11-25 09:03:38.942419587 +0000 UTC m=+0.347678343 container init dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:03:38 np0005534516 podman[395778]: 2025-11-25 09:03:38.951808252 +0000 UTC m=+0.357067028 container start dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:03:38 np0005534516 goofy_goldwasser[395821]: 167 167
Nov 25 04:03:38 np0005534516 systemd[1]: libpod-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope: Deactivated successfully.
Nov 25 04:03:38 np0005534516 conmon[395821]: conmon dc257c01d4a60eaa7986 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope/container/memory.events
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.039 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.038855, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Started (Lifecycle Event)#033[00m
Nov 25 04:03:39 np0005534516 podman[395778]: 2025-11-25 09:03:39.043995188 +0000 UTC m=+0.449253954 container attach dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:03:39 np0005534516 podman[395778]: 2025-11-25 09:03:39.045034106 +0000 UTC m=+0.450292852 container died dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.061 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.075 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.0390875, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.075 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.096 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.098 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.113 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:03:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-14fcf2b2042cbe2926ff5c1bc005ab6b15e8a4e7b2e009ac4d389f9e6fd5158f-merged.mount: Deactivated successfully.
Nov 25 04:03:39 np0005534516 podman[395778]: 2025-11-25 09:03:39.341429553 +0000 UTC m=+0.746688329 container remove dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:03:39 np0005534516 systemd[1]: libpod-conmon-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope: Deactivated successfully.
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.574 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No event matching network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in dict_keys([('network-vif-plugged', '25c8c441-cd5e-4cd3-9151-e8137db08e65')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 WARNING nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Processing event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 WARNING nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Processing event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.580 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.580 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.585 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.585224, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.587 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.588 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance spawned successfully.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.592 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.595 253542 INFO nova.virt.libvirt.driver [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance spawned successfully.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.597 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:03:39 np0005534516 podman[395869]: 2025-11-25 09:03:39.600504905 +0000 UTC m=+0.089342789 container create 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:03:39 np0005534516 podman[395869]: 2025-11-25 09:03:39.537162214 +0000 UTC m=+0.026000178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:39 np0005534516 systemd[1]: Started libpod-conmon-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope.
Nov 25 04:03:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:39 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.698 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.701 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.702 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.702 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 podman[395869]: 2025-11-25 09:03:39.722561274 +0000 UTC m=+0.211399178 container init 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.723 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:03:39 np0005534516 podman[395869]: 2025-11-25 09:03:39.732093412 +0000 UTC m=+0.220931296 container start 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:03:39 np0005534516 podman[395869]: 2025-11-25 09:03:39.736911864 +0000 UTC m=+0.225749778 container attach 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.768 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.768 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.5856168, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.769 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.797 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.799 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.806 253542 INFO nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 8.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.806 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.816 253542 INFO nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 16.45 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.817 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.832 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.875 253542 INFO nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 9.97 seconds to build instance.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.878 253542 INFO nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 18.06 seconds to build instance.#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.892 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:39 np0005534516 nova_compute[253538]: 2025-11-25 09:03:39.893 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]: {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    "0": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "devices": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "/dev/loop3"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            ],
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_name": "ceph_lv0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_size": "21470642176",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "name": "ceph_lv0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "tags": {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_name": "ceph",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.crush_device_class": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.encrypted": "0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_id": "0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.vdo": "0"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            },
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "vg_name": "ceph_vg0"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        }
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    ],
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    "1": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "devices": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "/dev/loop4"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            ],
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_name": "ceph_lv1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_size": "21470642176",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "name": "ceph_lv1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "tags": {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_name": "ceph",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.crush_device_class": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.encrypted": "0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_id": "1",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.vdo": "0"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            },
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "vg_name": "ceph_vg1"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        }
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    ],
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    "2": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "devices": [
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "/dev/loop5"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            ],
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_name": "ceph_lv2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_size": "21470642176",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "name": "ceph_lv2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "tags": {
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.cluster_name": "ceph",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.crush_device_class": "",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.encrypted": "0",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osd_id": "2",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:                "ceph.vdo": "0"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            },
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "type": "block",
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:            "vg_name": "ceph_vg2"
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:        }
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]:    ]
Nov 25 04:03:40 np0005534516 eloquent_leavitt[395885]: }
Nov 25 04:03:40 np0005534516 systemd[1]: libpod-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope: Deactivated successfully.
Nov 25 04:03:40 np0005534516 podman[395869]: 2025-11-25 09:03:40.612065634 +0000 UTC m=+1.100903508 container died 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:03:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea-merged.mount: Deactivated successfully.
Nov 25 04:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.087 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.089 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:41 np0005534516 podman[395869]: 2025-11-25 09:03:41.299478541 +0000 UTC m=+1.788316425 container remove 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:03:41 np0005534516 systemd[1]: libpod-conmon-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope: Deactivated successfully.
Nov 25 04:03:41 np0005534516 nova_compute[253538]: 2025-11-25 09:03:41.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:41.981977224 +0000 UTC m=+0.030988944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:42.106672423 +0000 UTC m=+0.155684123 container create bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:03:42 np0005534516 systemd[1]: Started libpod-conmon-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope.
Nov 25 04:03:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.341 253542 DEBUG nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.341 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:03:42 np0005534516 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 WARNING nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received unexpected event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with vm_state active and task_state None.#033[00m
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:42.59507387 +0000 UTC m=+0.644085600 container init bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:03:42 np0005534516 nostalgic_elgamal[396064]: 167 167
Nov 25 04:03:42 np0005534516 systemd[1]: libpod-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope: Deactivated successfully.
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:42.61052431 +0000 UTC m=+0.659536010 container start bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:03:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:42.814011462 +0000 UTC m=+0.863023262 container attach bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:03:42 np0005534516 podman[396048]: 2025-11-25 09:03:42.814935527 +0000 UTC m=+0.863947267 container died bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:03:43 np0005534516 nova_compute[253538]: 2025-11-25 09:03:43.072 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1cd28420316157bba06c463141e823a465e6c8de1667300fa4b451aac0083951-merged.mount: Deactivated successfully.
Nov 25 04:03:43 np0005534516 podman[396048]: 2025-11-25 09:03:43.475506674 +0000 UTC m=+1.524518394 container remove bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:03:43 np0005534516 systemd[1]: libpod-conmon-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope: Deactivated successfully.
Nov 25 04:03:43 np0005534516 nova_compute[253538]: 2025-11-25 09:03:43.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:43 np0005534516 podman[396086]: 2025-11-25 09:03:43.737015872 +0000 UTC m=+0.042478455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:03:43 np0005534516 nova_compute[253538]: 2025-11-25 09:03:43.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:43 np0005534516 podman[396086]: 2025-11-25 09:03:43.992365164 +0000 UTC m=+0.297827727 container create a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:03:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.112 253542 DEBUG nova.compute.manager [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.113 253542 DEBUG nova.compute.manager [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing instance network info cache due to event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.114 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.114 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.115 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:44 np0005534516 systemd[1]: Started libpod-conmon-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope.
Nov 25 04:03:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:03:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:03:44 np0005534516 podman[396086]: 2025-11-25 09:03:44.297788136 +0000 UTC m=+0.603250799 container init a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:03:44 np0005534516 podman[396086]: 2025-11-25 09:03:44.30677244 +0000 UTC m=+0.612235003 container start a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.443 253542 DEBUG nova.compute.manager [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.444 253542 DEBUG nova.compute.manager [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.445 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.446 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:44 np0005534516 nova_compute[253538]: 2025-11-25 09:03:44.446 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:03:44 np0005534516 podman[396086]: 2025-11-25 09:03:44.641161731 +0000 UTC m=+0.946624314 container attach a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]: {
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_id": 1,
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "type": "bluestore"
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    },
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_id": 2,
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "type": "bluestore"
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    },
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_id": 0,
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:        "type": "bluestore"
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]:    }
Nov 25 04:03:45 np0005534516 recursing_kirch[396103]: }
Nov 25 04:03:45 np0005534516 systemd[1]: libpod-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Deactivated successfully.
Nov 25 04:03:45 np0005534516 systemd[1]: libpod-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Consumed 1.049s CPU time.
Nov 25 04:03:45 np0005534516 podman[396086]: 2025-11-25 09:03:45.361332108 +0000 UTC m=+1.666794671 container died a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 04:03:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66-merged.mount: Deactivated successfully.
Nov 25 04:03:45 np0005534516 podman[396086]: 2025-11-25 09:03:45.928556427 +0000 UTC m=+2.234018990 container remove a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:03:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:03:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:03:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 531720b5-6ac8-481f-a6e9-acde38fc5bf2 does not exist
Nov 25 04:03:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e1bed3ad-cfac-4290-8292-b790c4dde700 does not exist
Nov 25 04:03:46 np0005534516 systemd[1]: libpod-conmon-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Deactivated successfully.
Nov 25 04:03:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 646 KiB/s wr, 162 op/s
Nov 25 04:03:46 np0005534516 nova_compute[253538]: 2025-11-25 09:03:46.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:03:47 np0005534516 nova_compute[253538]: 2025-11-25 09:03:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:47 np0005534516 nova_compute[253538]: 2025-11-25 09:03:47.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:03:47 np0005534516 nova_compute[253538]: 2025-11-25 09:03:47.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.778133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061427778208, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 531, "num_deletes": 258, "total_data_size": 499597, "memory_usage": 511288, "flush_reason": "Manual Compaction"}
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061427928033, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 495208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51820, "largest_seqno": 52350, "table_properties": {"data_size": 492244, "index_size": 936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7031, "raw_average_key_size": 18, "raw_value_size": 486191, "raw_average_value_size": 1293, "num_data_blocks": 41, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061398, "oldest_key_time": 1764061398, "file_creation_time": 1764061427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 149944 microseconds, and 2673 cpu microseconds.
Nov 25 04:03:47 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:03:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.928086) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 495208 bytes OK
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.928108) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091801) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091844) EVENT_LOG_v1 {"time_micros": 1764061428091835, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091873) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 496510, prev total WAL file size 496510, number of live WAL files 2.
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.092554) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303135' seq:72057594037927935, type:22 .. '6C6F676D0032323639' seq:0, type:0; will stop at (end)
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(483KB)], [119(9971KB)]
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428092599, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10706108, "oldest_snapshot_seqno": -1}
Nov 25 04:03:48 np0005534516 nova_compute[253538]: 2025-11-25 09:03:48.260 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:03:48 np0005534516 nova_compute[253538]: 2025-11-25 09:03:48.261 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:03:48 np0005534516 nova_compute[253538]: 2025-11-25 09:03:48.262 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:03:48 np0005534516 nova_compute[253538]: 2025-11-25 09:03:48.263 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7252 keys, 10589790 bytes, temperature: kUnknown
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428312442, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10589790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10540882, "index_size": 29660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 189793, "raw_average_key_size": 26, "raw_value_size": 10410776, "raw_average_value_size": 1435, "num_data_blocks": 1163, "num_entries": 7252, "num_filter_entries": 7252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.313097) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10589790 bytes
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.341089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.7 rd, 48.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(43.0) write-amplify(21.4) OK, records in: 7780, records dropped: 528 output_compression: NoCompression
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.341178) EVENT_LOG_v1 {"time_micros": 1764061428341146, "job": 72, "event": "compaction_finished", "compaction_time_micros": 219980, "compaction_time_cpu_micros": 27739, "output_level": 6, "num_output_files": 1, "total_output_size": 10589790, "num_input_records": 7780, "num_output_records": 7252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428341797, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428346478, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.092461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:03:48 np0005534516 nova_compute[253538]: 2025-11-25 09:03:48.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.322 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updated VIF entry in instance network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.323 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.326 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.327 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.347 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:49 np0005534516 nova_compute[253538]: 2025-11-25 09:03:49.349 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 20 KiB/s wr, 145 op/s
Nov 25 04:03:51 np0005534516 nova_compute[253538]: 2025-11-25 09:03:51.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.108 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.131 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.132 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.132 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.133 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:52 np0005534516 nova_compute[253538]: 2025-11-25 09:03:52.133 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:03:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:03:53
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:03:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:03:53 np0005534516 nova_compute[253538]: 2025-11-25 09:03:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:53 np0005534516 nova_compute[253538]: 2025-11-25 09:03:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:53 np0005534516 podman[396200]: 2025-11-25 09:03:53.827856712 +0000 UTC m=+0.065243734 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:03:53 np0005534516 podman[396199]: 2025-11-25 09:03:53.847045574 +0000 UTC m=+0.092125195 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd)
Nov 25 04:03:53 np0005534516 nova_compute[253538]: 2025-11-25 09:03:53.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 411 KiB/s wr, 101 op/s
Nov 25 04:03:54 np0005534516 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 04:03:55 np0005534516 nova_compute[253538]: 2025-11-25 09:03:55.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Nov 25 04:03:56 np0005534516 nova_compute[253538]: 2025-11-25 09:03:56.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:03:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 357 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.589 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.589 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:03:58 np0005534516 nova_compute[253538]: 2025-11-25 09:03:58.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:03:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:03:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321375396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.150 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.378 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.379 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.384 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.384 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.388 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.389 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.393 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.393 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.621 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2929MB free_disk=59.82619857788086GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.711 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 28376454-90b2-431d-9052-48b369973c8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 1ee319a5-b613-4b27-a1e6-64b0129bf269 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 60fab831-4ae4-4e18-a4e4-5466abbece52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:03:59 np0005534516 nova_compute[253538]: 2025-11-25 09:03:59.818 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:04:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 4.0 MiB/s wr, 77 op/s
Nov 25 04:04:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:04:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465604835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:04:00 np0005534516 nova_compute[253538]: 2025-11-25 09:04:00.280 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:04:00 np0005534516 nova_compute[253538]: 2025-11-25 09:04:00.285 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:04:00 np0005534516 nova_compute[253538]: 2025-11-25 09:04:00.296 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:04:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:00Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 04:04:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:00Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 04:04:00 np0005534516 nova_compute[253538]: 2025-11-25 09:04:00.346 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:04:00 np0005534516 nova_compute[253538]: 2025-11-25 09:04:00.347 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:01 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:01Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:7d:84 10.100.0.11
Nov 25 04:04:01 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:01Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:7d:84 10.100.0.11
Nov 25 04:04:01 np0005534516 nova_compute[253538]: 2025-11-25 09:04:01.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:01 np0005534516 podman[396282]: 2025-11-25 09:04:01.861278145 +0000 UTC m=+0.099662901 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:04:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 394 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 4.1 MiB/s wr, 109 op/s
Nov 25 04:04:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:03 np0005534516 nova_compute[253538]: 2025-11-25 09:04:03.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030138815971543923 of space, bias 1.0, pg target 0.9041644791463177 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:04:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:04:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 25 04:04:06 np0005534516 nova_compute[253538]: 2025-11-25 09:04:06.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.597 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.598 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.598 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.599 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.599 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.601 253542 INFO nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Terminating instance#033[00m
Nov 25 04:04:07 np0005534516 nova_compute[253538]: 2025-11-25 09:04:07.602 253542 DEBUG nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:04:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 3.0 MiB/s wr, 108 op/s
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 kernel: tapd2008aa0-ba (unregistering): left promiscuous mode
Nov 25 04:04:09 np0005534516 NetworkManager[48915]: <info>  [1764061449.2473] device (tapd2008aa0-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:09Z|01397|binding|INFO|Releasing lport d2008aa0-bac3-4d83-88d2-34376e911b2c from this chassis (sb_readonly=0)
Nov 25 04:04:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:09Z|01398|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c down in Southbound
Nov 25 04:04:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:09Z|01399|binding|INFO|Removing iface tapd2008aa0-ba ovn-installed in OVS
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 25 04:04:09 np0005534516 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Consumed 15.748s CPU time.
Nov 25 04:04:09 np0005534516 systemd-machined[215790]: Machine qemu-166-instance-00000088 terminated.
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.342 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:7d:84 10.100.0.11'], port_security=['fa:16:3e:72:7d:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '60fab831-4ae4-4e18-a4e4-5466abbece52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1e296d37-dec5-4d7e-978f-4bb613fbda54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d2008aa0-bac3-4d83-88d2-34376e911b2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.343 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d2008aa0-bac3-4d83-88d2-34376e911b2c in datapath 72472fc5-3661-404c-a0d2-df155795bd2b unbound from our chassis#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd0e34d-59a5-4f59-9899-e4718f6c09a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.400 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f7837-817a-4168-adf8-df7bebf46201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.403 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[228728de-bb02-421c-a574-b16945feef5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.445 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4a77e3-837c-4486-80a9-d523db8a035b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.451 253542 INFO nova.virt.libvirt.driver [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance destroyed successfully.#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.451 253542 DEBUG nova.objects.instance [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.463 253542 DEBUG nova.virt.libvirt.vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.464 253542 DEBUG nova.network.os_vif_util [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.465 253542 DEBUG nova.network.os_vif_util [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.465 253542 DEBUG os_vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.467 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2008aa0-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0448e5e9-38da-482e-838f-0cdf719a6941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396329, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.472 253542 INFO os_vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba')#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.486 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e331a514-27f2-47fa-8926-0c1be6cf9365]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672919, 'tstamp': 672919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396332, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672922, 'tstamp': 672922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396332, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.488 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.492 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:09 np0005534516 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.848 253542 DEBUG nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.848 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.849 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.849 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.850 253542 DEBUG nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:11 np0005534516 nova_compute[253538]: 2025-11-25 09:04:11.850 253542 WARNING nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received unexpected event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 285 KiB/s wr, 59 op/s
Nov 25 04:04:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:13 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:13Z|01400|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 04:04:14 np0005534516 nova_compute[253538]: 2025-11-25 09:04:14.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 177 KiB/s wr, 28 op/s
Nov 25 04:04:14 np0005534516 nova_compute[253538]: 2025-11-25 09:04:14.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 170 KiB/s wr, 30 op/s
Nov 25 04:04:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 44 KiB/s wr, 19 op/s
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.966 253542 DEBUG nova.compute.manager [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG nova.compute.manager [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:04:19 np0005534516 nova_compute[253538]: 2025-11-25 09:04:19.968 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:04:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 46 KiB/s wr, 27 op/s
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.108 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.108 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.110 253542 INFO nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Terminating instance#033[00m
Nov 25 04:04:20 np0005534516 nova_compute[253538]: 2025-11-25 09:04:20.111 253542 DEBUG nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:04:21 np0005534516 kernel: tap2cd88dce-60 (unregistering): left promiscuous mode
Nov 25 04:04:21 np0005534516 NetworkManager[48915]: <info>  [1764061461.2691] device (tap2cd88dce-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01401|binding|INFO|Releasing lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 from this chassis (sb_readonly=0)
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01402|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 down in Southbound
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.321 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01403|binding|INFO|Removing iface tap2cd88dce-60 ovn-installed in OVS
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.330 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e8:e0 10.100.0.14'], port_security=['fa:16:3e:1e:e8:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cd88dce-60d9-4da6-a5f6-ba6622fd8812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.332 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 unbound from our chassis#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.334 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 kernel: tap25c8c441-cd (unregistering): left promiscuous mode
Nov 25 04:04:21 np0005534516 NetworkManager[48915]: <info>  [1764061461.3434] device (tap25c8c441-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.348 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.348 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9374098-ba37-43bc-bb60-a5983587c27c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01404|binding|INFO|Releasing lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 from this chassis (sb_readonly=0)
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01405|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 down in Southbound
Nov 25 04:04:21 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:21Z|01406|binding|INFO|Removing iface tap25c8c441-cd ovn-installed in OVS
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.361 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], port_security=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefe:c4a5/64', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=25c8c441-cd5e-4cd3-9151-e8137db08e65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.377 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.389 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e09a5fa4-bcfb-4a62-8470-9aa115e1930f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.393 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[176afe60-6f21-45b1-ad01-32510ca6b176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.423 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e19e408f-eaa9-4d12-bb0d-50a494023086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 25 04:04:21 np0005534516 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Consumed 16.896s CPU time.
Nov 25 04:04:21 np0005534516 systemd-machined[215790]: Machine qemu-165-instance-00000087 terminated.
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a32bff81-c7d5-45a0-8577-e8c8f330af73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396370, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[281c26aa-ea9f-4597-942d-7c9262358664]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672419, 'tstamp': 672419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396371, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672421, 'tstamp': 672421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396371, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.472 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.474 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 25c8c441-cd5e-4cd3-9151-e8137db08e65 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.475 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.490 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd81cc-bfdc-4b9c-be4d-4dd41a975d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.531 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9673c4-28a9-49d3-a1b2-4efc3f44c94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e48481-30fd-45fa-897c-c855879732ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 NetworkManager[48915]: <info>  [1764061461.5410] manager: (tap25c8c441-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.559 253542 INFO nova.virt.libvirt.driver [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance destroyed successfully.#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.560 253542 DEBUG nova.objects.instance [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.574 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb56adc-9de4-4e9c-8f8d-8e53bccda2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16007a2b-fc31-4781-8254-d65823c32720]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396398, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.604 253542 DEBUG nova.virt.libvirt.vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.604 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.605 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.605 253542 DEBUG os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.607 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cd88dce-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.619 253542 INFO os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60')#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.620 253542 DEBUG nova.virt.libvirt.vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.620 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.621 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.621 253542 DEBUG os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.622 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25c8c441-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[558c0839-81db-43fb-92fc-558232c31d81]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap269f4fa4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672580, 'tstamp': 672580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396400, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.628 253542 INFO os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd')#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.659 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:21 np0005534516 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 25 op/s
Nov 25 04:04:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.809 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.809 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 WARNING nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.813 253542 WARNING nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.821 253542 INFO nova.virt.libvirt.driver [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deleting instance files /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52_del#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.822 253542 INFO nova.virt.libvirt.driver [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deletion of /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52_del complete#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.874 253542 INFO nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 16.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.875 253542 DEBUG oslo.service.loopingcall [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.876 253542 DEBUG nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:04:23 np0005534516 nova_compute[253538]: 2025-11-25 09:04:23.876 253542 DEBUG nova.network.neutron [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.449 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061449.4481587, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.450 253542 INFO nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.463 253542 DEBUG nova.compute.manager [None req-89e273be-1c0f-46c7-9640-ba7499c4ffe5 - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.576 253542 DEBUG nova.network.neutron [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.589 253542 INFO nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 0.71 seconds to deallocate network for instance.#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.629 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.629 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.767 253542 DEBUG nova.compute.manager [req-9cdb6414-ad8f-4b17-9223-f7363215a641 req-9b8aa4fc-a20b-4331-99ed-59ddc9faec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-deleted-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:24 np0005534516 nova_compute[253538]: 2025-11-25 09:04:24.795 253542 DEBUG oslo_concurrency.processutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:04:24 np0005534516 podman[396420]: 2025-11-25 09:04:24.80543474 +0000 UTC m=+0.054929695 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:04:24 np0005534516 podman[396419]: 2025-11-25 09:04:24.816506231 +0000 UTC m=+0.066722285 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 04:04:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:04:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228841934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.243 253542 DEBUG oslo_concurrency.processutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.250 253542 DEBUG nova.compute.provider_tree [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.266 253542 DEBUG nova.scheduler.client.report [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.439 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.537 253542 INFO nova.scheduler.client.report [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 60fab831-4ae4-4e18-a4e4-5466abbece52#033[00m
Nov 25 04:04:25 np0005534516 nova_compute[253538]: 2025-11-25 09:04:25.609 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.924 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.926 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.927 253542 INFO nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Terminating instance#033[00m
Nov 25 04:04:26 np0005534516 nova_compute[253538]: 2025-11-25 09:04:26.928 253542 DEBUG nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.010 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.010 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG nova.compute.manager [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG nova.compute.manager [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.041 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:04:27 np0005534516 kernel: tapa93aab06-4a (unregistering): left promiscuous mode
Nov 25 04:04:27 np0005534516 NetworkManager[48915]: <info>  [1764061467.8369] device (tapa93aab06-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:27Z|01407|binding|INFO|Releasing lport a93aab06-4a98-453a-87c3-01b817ee7602 from this chassis (sb_readonly=0)
Nov 25 04:04:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:27Z|01408|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 down in Southbound
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:27Z|01409|binding|INFO|Removing iface tapa93aab06-4a ovn-installed in OVS
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.860 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:fb:42 10.100.0.13'], port_security=['fa:16:3e:2f:fb:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c4a1d63-7674-4276-8da9-b9d4f4fea307', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '046f46ed-7d5f-45ad-8313-fa0fe77b097a 0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a93aab06-4a98-453a-87c3-01b817ee7602) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.861 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a93aab06-4a98-453a-87c3-01b817ee7602 in datapath 72472fc5-3661-404c-a0d2-df155795bd2b unbound from our chassis#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.863 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72472fc5-3661-404c-a0d2-df155795bd2b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70bd5e33-e7c6-464c-be49-330582d7f1c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.864 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b namespace which is not needed anymore#033[00m
Nov 25 04:04:27 np0005534516 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 25 04:04:27 np0005534516 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Consumed 16.553s CPU time.
Nov 25 04:04:27 np0005534516 systemd-machined[215790]: Machine qemu-164-instance-00000086 terminated.
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.965 253542 INFO nova.virt.libvirt.driver [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance destroyed successfully.#033[00m
Nov 25 04:04:27 np0005534516 nova_compute[253538]: 2025-11-25 09:04:27.965 253542 DEBUG nova.objects.instance [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.003 253542 DEBUG nova.virt.libvirt.vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.004 253542 DEBUG nova.network.os_vif_util [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.005 253542 DEBUG nova.network.os_vif_util [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.005 253542 DEBUG os_vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa93aab06-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:28.012 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.014 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.014 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.016 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.019 253542 INFO os_vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a')#033[00m
Nov 25 04:04:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.195 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.196 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:28 np0005534516 nova_compute[253538]: 2025-11-25 09:04:28.213 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:04:28 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : haproxy version is 2.8.14-c23fe91
Nov 25 04:04:28 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : path to executable is /usr/sbin/haproxy
Nov 25 04:04:28 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [WARNING]  (394381) : Exiting Master process...
Nov 25 04:04:28 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [ALERT]    (394381) : Current worker (394383) exited with code 143 (Terminated)
Nov 25 04:04:28 np0005534516 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [WARNING]  (394381) : All workers exited. Exiting... (0)
Nov 25 04:04:28 np0005534516 systemd[1]: libpod-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope: Deactivated successfully.
Nov 25 04:04:28 np0005534516 podman[396513]: 2025-11-25 09:04:28.241746933 +0000 UTC m=+0.275941723 container died 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:04:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2-userdata-shm.mount: Deactivated successfully.
Nov 25 04:04:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c472ba9f61967f5c2c1bbfa218a3c4c1a73bb2a29d7b40245ca25e38f08a9e8a-merged.mount: Deactivated successfully.
Nov 25 04:04:28 np0005534516 podman[396513]: 2025-11-25 09:04:28.756606519 +0000 UTC m=+0.790801309 container cleanup 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 04:04:28 np0005534516 systemd[1]: libpod-conmon-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope: Deactivated successfully.
Nov 25 04:04:29 np0005534516 nova_compute[253538]: 2025-11-25 09:04:29.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:04:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:04:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:04:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:04:29 np0005534516 podman[396569]: 2025-11-25 09:04:29.222269638 +0000 UTC m=+0.440946589 container remove 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4115d3b-7308-40d6-af78-c8b23603fbbe]: (4, ('Tue Nov 25 09:04:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b (4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2)\n4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2\nTue Nov 25 09:04:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b (4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2)\n4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fe35cb-56d4-416b-b262-cb3debb65565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.232 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:29 np0005534516 nova_compute[253538]: 2025-11-25 09:04:29.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:29 np0005534516 kernel: tap72472fc5-30: left promiscuous mode
Nov 25 04:04:29 np0005534516 nova_compute[253538]: 2025-11-25 09:04:29.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a2db03-9811-4f7b-a5b3-f130cb12ad95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 nova_compute[253538]: 2025-11-25 09:04:29.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[26b87281-c4f3-4276-af41-ecfe4b916454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.259 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71493e26-00e1-45fa-912d-e4c35d3e21ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e932295-e198-4e82-8455-6d07f7637b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672901, 'reachable_time': 22980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396583, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.283 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:04:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.283 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[885aefcf-aa9c-42d4-948d-41708341ffbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:29 np0005534516 systemd[1]: run-netns-ovnmeta\x2d72472fc5\x2d3661\x2d404c\x2da0d2\x2ddf155795bd2b.mount: Deactivated successfully.
Nov 25 04:04:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 262 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 9.2 KiB/s wr, 37 op/s
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.534 253542 DEBUG nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.535 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.535 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.536 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.536 253542 DEBUG nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:30 np0005534516 nova_compute[253538]: 2025-11-25 09:04:30.537 253542 WARNING nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received unexpected event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.0 KiB/s wr, 39 op/s
Nov 25 04:04:32 np0005534516 podman[396584]: 2025-11-25 09:04:32.835626254 +0000 UTC m=+0.086541084 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:04:33 np0005534516 nova_compute[253538]: 2025-11-25 09:04:33.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:34 np0005534516 nova_compute[253538]: 2025-11-25 09:04:34.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 204 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 37 op/s
Nov 25 04:04:34 np0005534516 nova_compute[253538]: 2025-11-25 09:04:34.957 253542 INFO nova.virt.libvirt.driver [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deleting instance files /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269_del#033[00m
Nov 25 04:04:34 np0005534516 nova_compute[253538]: 2025-11-25 09:04:34.958 253542 INFO nova.virt.libvirt.driver [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deletion of /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269_del complete#033[00m
Nov 25 04:04:35 np0005534516 nova_compute[253538]: 2025-11-25 09:04:35.020 253542 INFO nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 14.91 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:04:35 np0005534516 nova_compute[253538]: 2025-11-25 09:04:35.020 253542 DEBUG oslo.service.loopingcall [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:04:35 np0005534516 nova_compute[253538]: 2025-11-25 09:04:35.021 253542 DEBUG nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:04:35 np0005534516 nova_compute[253538]: 2025-11-25 09:04:35.021 253542 DEBUG nova.network.neutron [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.083 253542 DEBUG nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-deleted-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.083 253542 INFO nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Neutron deleted interface 25c8c441-cd5e-4cd3-9151-e8137db08e65; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.084 253542 DEBUG nova.network.neutron [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.105 253542 DEBUG nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Detach interface failed, port_id=25c8c441-cd5e-4cd3-9151-e8137db08e65, reason: Instance 1ee319a5-b613-4b27-a1e6-64b0129bf269 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 04:04:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 169 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 7.1 KiB/s wr, 40 op/s
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.557 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061461.5565076, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.558 253542 INFO nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.580 253542 DEBUG nova.compute.manager [None req-9685c2eb-2920-4e4a-985f-d4ddddda688c - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.651 253542 DEBUG nova.network.neutron [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.686 253542 INFO nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 1.66 seconds to deallocate network for instance.#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.760 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.760 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:36 np0005534516 nova_compute[253538]: 2025-11-25 09:04:36.876 253542 DEBUG oslo_concurrency.processutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:04:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:04:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857513962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.340 253542 DEBUG oslo_concurrency.processutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.349 253542 DEBUG nova.compute.provider_tree [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.366 253542 DEBUG nova.scheduler.client.report [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.386 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.416 253542 INFO nova.scheduler.client.report [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 1ee319a5-b613-4b27-a1e6-64b0129bf269#033[00m
Nov 25 04:04:37 np0005534516 nova_compute[253538]: 2025-11-25 09:04:37.470 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 44 op/s
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.171 253542 DEBUG nova.compute.manager [req-c43723f9-b411-43d0-a5e1-22ff2b03ba4f req-1240758d-2eae-45ce-8b72-a169476780b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-deleted-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.327 253542 INFO nova.virt.libvirt.driver [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deleting instance files /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307_del#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.328 253542 INFO nova.virt.libvirt.driver [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deletion of /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307_del complete#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.378 253542 INFO nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 11.45 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.379 253542 DEBUG oslo.service.loopingcall [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.380 253542 DEBUG nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.382 253542 DEBUG nova.network.neutron [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.792 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.792 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.795 253542 INFO nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Terminating instance#033[00m
Nov 25 04:04:38 np0005534516 nova_compute[253538]: 2025-11-25 09:04:38.796 253542 DEBUG nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 kernel: tap9918858c-8b (unregistering): left promiscuous mode
Nov 25 04:04:39 np0005534516 NetworkManager[48915]: <info>  [1764061479.1192] device (tap9918858c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01410|binding|INFO|Releasing lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 from this chassis (sb_readonly=0)
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01411|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 down in Southbound
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01412|binding|INFO|Removing iface tap9918858c-8b ovn-installed in OVS
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.144 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:ea 10.100.0.12'], port_security=['fa:16:3e:f4:e3:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9918858c-8b7c-4d3f-aada-d04fcb6eab03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.146 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 unbound from our chassis#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.147 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f691304c-d112-4c32-b3ac-0f33230178b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.148 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa3876f-1e51-47d4-aa42-0afa22af178e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.149 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 namespace which is not needed anymore#033[00m
Nov 25 04:04:39 np0005534516 kernel: tapfd44d480-02 (unregistering): left promiscuous mode
Nov 25 04:04:39 np0005534516 NetworkManager[48915]: <info>  [1764061479.1585] device (tapfd44d480-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01413|binding|INFO|Releasing lport fd44d480-0242-4c7a-b02e-f58852c99ca0 from this chassis (sb_readonly=0)
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01414|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 down in Southbound
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.169 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_controller[152859]: 2025-11-25T09:04:39Z|01415|binding|INFO|Removing iface tapfd44d480-02 ovn-installed in OVS
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.177 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], port_security=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:5e87/64', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fd44d480-0242-4c7a-b02e-f58852c99ca0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 25 04:04:39 np0005534516 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000085.scope: Consumed 17.307s CPU time.
Nov 25 04:04:39 np0005534516 systemd-machined[215790]: Machine qemu-163-instance-00000085 terminated.
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : haproxy version is 2.8.14-c23fe91
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : path to executable is /usr/sbin/haproxy
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [WARNING]  (393919) : Exiting Master process...
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [ALERT]    (393919) : Current worker (393939) exited with code 143 (Terminated)
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [WARNING]  (393919) : All workers exited. Exiting... (0)
Nov 25 04:04:39 np0005534516 systemd[1]: libpod-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope: Deactivated successfully.
Nov 25 04:04:39 np0005534516 podman[396659]: 2025-11-25 09:04:39.332902095 +0000 UTC m=+0.099763633 container died adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:04:39 np0005534516 NetworkManager[48915]: <info>  [1764061479.4398] manager: (tapfd44d480-02): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.458 253542 INFO nova.virt.libvirt.driver [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance destroyed successfully.#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.458 253542 DEBUG nova.objects.instance [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:04:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0-userdata-shm.mount: Deactivated successfully.
Nov 25 04:04:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a80d86f6fd563b663b27ffb3825320ba53e316c324703edef04af20e73a076c2-merged.mount: Deactivated successfully.
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.489 253542 DEBUG nova.virt.libvirt.vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.490 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.490 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.491 253542 DEBUG os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9918858c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.503 253542 INFO os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b')#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.504 253542 DEBUG nova.virt.libvirt.vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.505 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.505 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.506 253542 DEBUG os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.507 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd44d480-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.513 253542 INFO os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02')#033[00m
Nov 25 04:04:39 np0005534516 podman[396659]: 2025-11-25 09:04:39.607010517 +0000 UTC m=+0.373872045 container cleanup adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:04:39 np0005534516 systemd[1]: libpod-conmon-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope: Deactivated successfully.
Nov 25 04:04:39 np0005534516 podman[396734]: 2025-11-25 09:04:39.68072164 +0000 UTC m=+0.052755745 container remove adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1c163-9cc9-434e-8953-3b5c260d4653]: (4, ('Tue Nov 25 09:04:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 (adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0)\nadb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0\nTue Nov 25 09:04:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 (adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0)\nadb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5091e52-6804-4384-93f6-b3b24066afc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.690 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 kernel: tapf691304c-d0: left promiscuous mode
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f245312-84ce-4e2a-b54e-50e1d0ad42ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01c3c4c1-2bd7-4538-b14c-88b6432860a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.717 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05c38b24-d1bc-4e79-916a-7831bb2c6cbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.718 253542 DEBUG nova.network.neutron [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96b6a677-ff1f-4d6b-8b9c-7ef15d0243e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672399, 'reachable_time': 41962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396749, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 systemd[1]: run-netns-ovnmeta\x2df691304c\x2dd112\x2d4c32\x2db3ac\x2d0f33230178b0.mount: Deactivated successfully.
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.735 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.735 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6f762ead-6ed9-4603-b6dd-6be3b4d0bd10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.736 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fd44d480-0242-4c7a-b02e-f58852c99ca0 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.738 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.738 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8866200-77a4-4de7-a512-a58b9272e0bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.739 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 namespace which is not needed anymore#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.748 253542 INFO nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.786 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.787 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:39 np0005534516 nova_compute[253538]: 2025-11-25 09:04:39.857 253542 DEBUG oslo_concurrency.processutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : haproxy version is 2.8.14-c23fe91
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : path to executable is /usr/sbin/haproxy
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [WARNING]  (394125) : Exiting Master process...
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [ALERT]    (394125) : Current worker (394142) exited with code 143 (Terminated)
Nov 25 04:04:39 np0005534516 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [WARNING]  (394125) : All workers exited. Exiting... (0)
Nov 25 04:04:39 np0005534516 systemd[1]: libpod-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope: Deactivated successfully.
Nov 25 04:04:39 np0005534516 podman[396767]: 2025-11-25 09:04:39.894652905 +0000 UTC m=+0.052510128 container died 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:04:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5-userdata-shm.mount: Deactivated successfully.
Nov 25 04:04:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0a675655322cb97a61297c5daeb9688e6dd7732cb7cb875eab3105e1fff65389-merged.mount: Deactivated successfully.
Nov 25 04:04:39 np0005534516 podman[396767]: 2025-11-25 09:04:39.939546896 +0000 UTC m=+0.097404079 container cleanup 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:04:39 np0005534516 systemd[1]: libpod-conmon-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope: Deactivated successfully.
Nov 25 04:04:40 np0005534516 podman[396798]: 2025-11-25 09:04:40.009882278 +0000 UTC m=+0.041865699 container remove 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00da122f-e9f6-48e7-b069-4af68dceb23f]: (4, ('Tue Nov 25 09:04:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 (205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5)\n205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5\nTue Nov 25 09:04:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 (205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5)\n205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.018 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0e97d4-ccfd-44c8-a193-64f3cb2185f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.019 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:04:40 np0005534516 kernel: tap269f4fa4-a0: left promiscuous mode
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[144ce451-5c69-4e4d-a9d3-e19335b4c9d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.095 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a12719-4116-4f32-915d-c3db79de4b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1f7693-0e7b-459c-8a9c-4ce1367a5a3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 136 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.121 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35fe3141-ac5c-437f-a142-f521e55196d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672563, 'reachable_time': 39102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396830, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.123 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:04:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.123 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d3911d0a-4fb3-4d52-b5d6-a87c5a01bb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.177 253542 INFO nova.virt.libvirt.driver [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deleting instance files /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e_del#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.178 253542 INFO nova.virt.libvirt.driver [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deletion of /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e_del complete#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.249 253542 INFO nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 1.45 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.250 253542 DEBUG oslo.service.loopingcall [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.250 253542 DEBUG nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.251 253542 DEBUG nova.network.neutron [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.267 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.268 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.268 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.269 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.269 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:04:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:04:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487161151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.363 253542 DEBUG oslo_concurrency.processutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.370 253542 DEBUG nova.compute.provider_tree [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.434 253542 DEBUG nova.scheduler.client.report [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.476 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:40 np0005534516 systemd[1]: run-netns-ovnmeta\x2d269f4fa4\x2da7fb\x2d4f9a\x2db49d\x2d3b1968826304.mount: Deactivated successfully.
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.513 253542 INFO nova.scheduler.client.report [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307#033[00m
Nov 25 04:04:40 np0005534516 nova_compute[253538]: 2025-11-25 09:04:40.581 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.011 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.012 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.030 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.031 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-deleted-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.035 253542 WARNING nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.035 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.039 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.039 253542 WARNING nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:04:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 103 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 39 op/s
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.279 253542 DEBUG nova.network.neutron [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.326 253542 INFO nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 2.07 seconds to deallocate network for instance.#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.344 253542 DEBUG nova.compute.manager [req-101ba256-08c4-44a4-bea2-9be65234d09d req-430a455d-3a28-4424-a160-a449c57f2a04 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-deleted-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.344 253542 DEBUG nova.compute.manager [req-101ba256-08c4-44a4-bea2-9be65234d09d req-430a455d-3a28-4424-a160-a449c57f2a04 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-deleted-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.404 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.405 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.445 253542 DEBUG oslo_concurrency.processutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:04:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:04:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120779292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.883 253542 DEBUG oslo_concurrency.processutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.890 253542 DEBUG nova.compute.provider_tree [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.903 253542 DEBUG nova.scheduler.client.report [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.933 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.964 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061467.9628727, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.964 253542 INFO nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:04:42 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.989 253542 DEBUG nova.compute.manager [None req-2f725777-ee92-4b58-bf96-b7c7a2e577cd - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:04:43 np0005534516 nova_compute[253538]: 2025-11-25 09:04:42.999 253542 INFO nova.scheduler.client.report [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 28376454-90b2-431d-9052-48b369973c8e#033[00m
Nov 25 04:04:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:43 np0005534516 nova_compute[253538]: 2025-11-25 09:04:43.111 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:04:44 np0005534516 nova_compute[253538]: 2025-11-25 09:04:44.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 88 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.4 KiB/s wr, 44 op/s
Nov 25 04:04:44 np0005534516 nova_compute[253538]: 2025-11-25 09:04:44.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:44 np0005534516 nova_compute[253538]: 2025-11-25 09:04:44.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:04:44 np0005534516 nova_compute[253538]: 2025-11-25 09:04:44.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:45 np0005534516 nova_compute[253538]: 2025-11-25 09:04:45.349 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:45 np0005534516 nova_compute[253538]: 2025-11-25 09:04:45.349 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 49 op/s
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8a08af04-d070-4a65-9410-0b63ab08408f does not exist
Nov 25 04:04:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6603b883-ac97-4819-b081-fdde4ebb62cb does not exist
Nov 25 04:04:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 08bf02cd-255a-4faf-9999-8507075e3581 does not exist
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:04:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:04:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:04:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.623905809 +0000 UTC m=+0.038560470 container create 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 04:04:47 np0005534516 systemd[1]: Started libpod-conmon-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope.
Nov 25 04:04:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.606274509 +0000 UTC m=+0.020929190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.708516778 +0000 UTC m=+0.123171469 container init 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.716935807 +0000 UTC m=+0.131590468 container start 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.720893744 +0000 UTC m=+0.135548405 container attach 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:04:47 np0005534516 funny_goldstine[397146]: 167 167
Nov 25 04:04:47 np0005534516 systemd[1]: libpod-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope: Deactivated successfully.
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.725943762 +0000 UTC m=+0.140598433 container died 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:04:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e7c1a6e4d72f3583ef2740e7f855ad6ae8b2839060bb518f836c45a7fa90ca6a-merged.mount: Deactivated successfully.
Nov 25 04:04:47 np0005534516 podman[397129]: 2025-11-25 09:04:47.768642472 +0000 UTC m=+0.183297133 container remove 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:04:47 np0005534516 systemd[1]: libpod-conmon-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope: Deactivated successfully.
Nov 25 04:04:47 np0005534516 podman[397169]: 2025-11-25 09:04:47.964196899 +0000 UTC m=+0.055414168 container create e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:04:47 np0005534516 systemd[1]: Started libpod-conmon-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope.
Nov 25 04:04:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:48 np0005534516 podman[397169]: 2025-11-25 09:04:48.022704699 +0000 UTC m=+0.113921988 container init e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:04:48 np0005534516 podman[397169]: 2025-11-25 09:04:48.032246628 +0000 UTC m=+0.123463887 container start e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:04:48 np0005534516 podman[397169]: 2025-11-25 09:04:48.034685465 +0000 UTC m=+0.125902734 container attach e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:04:48 np0005534516 podman[397169]: 2025-11-25 09:04:47.94588822 +0000 UTC m=+0.037105509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Nov 25 04:04:48 np0005534516 nova_compute[253538]: 2025-11-25 09:04:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:48 np0005534516 nova_compute[253538]: 2025-11-25 09:04:48.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:04:48 np0005534516 nova_compute[253538]: 2025-11-25 09:04:48.613 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:04:49 np0005534516 nova_compute[253538]: 2025-11-25 09:04:49.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:49 np0005534516 happy_mirzakhani[397186]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:04:49 np0005534516 happy_mirzakhani[397186]: --> relative data size: 1.0
Nov 25 04:04:49 np0005534516 happy_mirzakhani[397186]: --> All data devices are unavailable
Nov 25 04:04:49 np0005534516 systemd[1]: libpod-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope: Deactivated successfully.
Nov 25 04:04:49 np0005534516 podman[397215]: 2025-11-25 09:04:49.379723308 +0000 UTC m=+0.026091530 container died e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:04:49 np0005534516 nova_compute[253538]: 2025-11-25 09:04:49.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425-merged.mount: Deactivated successfully.
Nov 25 04:04:50 np0005534516 podman[397215]: 2025-11-25 09:04:50.085671009 +0000 UTC m=+0.732039211 container remove e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:50 np0005534516 systemd[1]: libpod-conmon-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope: Deactivated successfully.
Nov 25 04:04:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Nov 25 04:04:50 np0005534516 nova_compute[253538]: 2025-11-25 09:04:50.606 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:50 np0005534516 podman[397368]: 2025-11-25 09:04:50.726953711 +0000 UTC m=+0.020713753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.01160976 +0000 UTC m=+0.305369772 container create ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 04:04:51 np0005534516 systemd[1]: Started libpod-conmon-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope.
Nov 25 04:04:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.295014014 +0000 UTC m=+0.588774056 container init ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.304635635 +0000 UTC m=+0.598395657 container start ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:51 np0005534516 gallant_keldysh[397384]: 167 167
Nov 25 04:04:51 np0005534516 systemd[1]: libpod-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope: Deactivated successfully.
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.607398336 +0000 UTC m=+0.901158348 container attach ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.609239996 +0000 UTC m=+0.903000008 container died ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:04:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7498f61e2443d5d62b12b01d538269d5902c0045074d5409b4f65d42f3380a8a-merged.mount: Deactivated successfully.
Nov 25 04:04:51 np0005534516 podman[397368]: 2025-11-25 09:04:51.785546409 +0000 UTC m=+1.079306411 container remove ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:51 np0005534516 systemd[1]: libpod-conmon-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope: Deactivated successfully.
Nov 25 04:04:51 np0005534516 podman[397408]: 2025-11-25 09:04:51.935611419 +0000 UTC m=+0.037035509 container create ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:04:51 np0005534516 systemd[1]: Started libpod-conmon-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope.
Nov 25 04:04:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:51.920751724 +0000 UTC m=+0.022175824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:52.030344003 +0000 UTC m=+0.131768103 container init ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:52.037131738 +0000 UTC m=+0.138555818 container start ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:52.040838599 +0000 UTC m=+0.142262699 container attach ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:04:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 26 op/s
Nov 25 04:04:52 np0005534516 nova_compute[253538]: 2025-11-25 09:04:52.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:52 np0005534516 nova_compute[253538]: 2025-11-25 09:04:52.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:52 np0005534516 nova_compute[253538]: 2025-11-25 09:04:52.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:04:52 np0005534516 friendly_brown[397425]: {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    "0": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "devices": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "/dev/loop3"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            ],
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_name": "ceph_lv0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_size": "21470642176",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "name": "ceph_lv0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "tags": {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_name": "ceph",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.crush_device_class": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.encrypted": "0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_id": "0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.vdo": "0"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            },
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "vg_name": "ceph_vg0"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        }
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    ],
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    "1": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "devices": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "/dev/loop4"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            ],
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_name": "ceph_lv1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_size": "21470642176",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "name": "ceph_lv1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "tags": {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_name": "ceph",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.crush_device_class": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.encrypted": "0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_id": "1",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.vdo": "0"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            },
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "vg_name": "ceph_vg1"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        }
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    ],
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    "2": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "devices": [
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "/dev/loop5"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            ],
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_name": "ceph_lv2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_size": "21470642176",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "name": "ceph_lv2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "tags": {
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.cluster_name": "ceph",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.crush_device_class": "",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.encrypted": "0",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osd_id": "2",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:                "ceph.vdo": "0"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            },
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "type": "block",
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:            "vg_name": "ceph_vg2"
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:        }
Nov 25 04:04:52 np0005534516 friendly_brown[397425]:    ]
Nov 25 04:04:52 np0005534516 friendly_brown[397425]: }
Nov 25 04:04:52 np0005534516 systemd[1]: libpod-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope: Deactivated successfully.
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:52.858823705 +0000 UTC m=+0.960247785 container died ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:04:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8-merged.mount: Deactivated successfully.
Nov 25 04:04:52 np0005534516 podman[397408]: 2025-11-25 09:04:52.912484513 +0000 UTC m=+1.013908593 container remove ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:52 np0005534516 systemd[1]: libpod-conmon-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope: Deactivated successfully.
Nov 25 04:04:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:04:53
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:04:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.511784135 +0000 UTC m=+0.042366103 container create f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:04:53 np0005534516 nova_compute[253538]: 2025-11-25 09:04:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:53 np0005534516 systemd[1]: Started libpod-conmon-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope.
Nov 25 04:04:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.494757122 +0000 UTC m=+0.025339120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.600868776 +0000 UTC m=+0.131450774 container init f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.609340237 +0000 UTC m=+0.139922195 container start f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.613646624 +0000 UTC m=+0.144228612 container attach f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:04:53 np0005534516 peaceful_hawking[397604]: 167 167
Nov 25 04:04:53 np0005534516 systemd[1]: libpod-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope: Deactivated successfully.
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.615115784 +0000 UTC m=+0.145697752 container died f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:04:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9f03bf4cb38f84816c566e32deedf5bdc92d884d27aa292f0602868fb08df768-merged.mount: Deactivated successfully.
Nov 25 04:04:53 np0005534516 podman[397588]: 2025-11-25 09:04:53.651197595 +0000 UTC m=+0.181779563 container remove f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:53 np0005534516 systemd[1]: libpod-conmon-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope: Deactivated successfully.
Nov 25 04:04:53 np0005534516 podman[397627]: 2025-11-25 09:04:53.82539561 +0000 UTC m=+0.048613752 container create b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:04:53 np0005534516 systemd[1]: Started libpod-conmon-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope.
Nov 25 04:04:53 np0005534516 podman[397627]: 2025-11-25 09:04:53.805077328 +0000 UTC m=+0.028295460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:04:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:04:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:04:53 np0005534516 podman[397627]: 2025-11-25 09:04:53.937746604 +0000 UTC m=+0.160964736 container init b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:04:53 np0005534516 podman[397627]: 2025-11-25 09:04:53.951894108 +0000 UTC m=+0.175112240 container start b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:04:53 np0005534516 podman[397627]: 2025-11-25 09:04:53.956631578 +0000 UTC m=+0.179849730 container attach b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:04:54 np0005534516 nova_compute[253538]: 2025-11-25 09:04:54.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:04:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 20 op/s
Nov 25 04:04:54 np0005534516 nova_compute[253538]: 2025-11-25 09:04:54.455 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061479.4542713, 28376454-90b2-431d-9052-48b369973c8e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:04:54 np0005534516 nova_compute[253538]: 2025-11-25 09:04:54.456 253542 INFO nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:04:54 np0005534516 nova_compute[253538]: 2025-11-25 09:04:54.475 253542 DEBUG nova.compute.manager [None req-584695c1-1904-4507-9f8b-7a65274d7549 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:04:54 np0005534516 nova_compute[253538]: 2025-11-25 09:04:54.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]: {
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_id": 1,
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "type": "bluestore"
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    },
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_id": 2,
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "type": "bluestore"
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    },
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_id": 0,
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:        "type": "bluestore"
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]:    }
Nov 25 04:04:54 np0005534516 ecstatic_hodgkin[397644]: }
Nov 25 04:04:54 np0005534516 systemd[1]: libpod-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Deactivated successfully.
Nov 25 04:04:54 np0005534516 podman[397627]: 2025-11-25 09:04:54.978762933 +0000 UTC m=+1.201981035 container died b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:04:54 np0005534516 systemd[1]: libpod-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Consumed 1.032s CPU time.
Nov 25 04:04:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0-merged.mount: Deactivated successfully.
Nov 25 04:04:55 np0005534516 podman[397627]: 2025-11-25 09:04:55.044645454 +0000 UTC m=+1.267863566 container remove b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:04:55 np0005534516 systemd[1]: libpod-conmon-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Deactivated successfully.
Nov 25 04:04:55 np0005534516 podman[397678]: 2025-11-25 09:04:55.084337024 +0000 UTC m=+0.067610970 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:55 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 11d73f36-6b86-42ce-a003-e1ba2ce0955b does not exist
Nov 25 04:04:55 np0005534516 podman[397686]: 2025-11-25 09:04:55.105688543 +0000 UTC m=+0.090908961 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 04:04:55 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0353cb75-88c3-495a-82c3-5bb9285ff9c7 does not exist
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:04:55 np0005534516 nova_compute[253538]: 2025-11-25 09:04:55.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 25 04:04:56 np0005534516 nova_compute[253538]: 2025-11-25 09:04:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:04:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.247365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498247450, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 250, "total_data_size": 1059638, "memory_usage": 1083208, "flush_reason": "Manual Compaction"}
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498282123, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 666791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52351, "largest_seqno": 53164, "table_properties": {"data_size": 663370, "index_size": 1201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9126, "raw_average_key_size": 20, "raw_value_size": 656158, "raw_average_value_size": 1484, "num_data_blocks": 54, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061428, "oldest_key_time": 1764061428, "file_creation_time": 1764061498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 34802 microseconds, and 3005 cpu microseconds.
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.282175) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 666791 bytes OK
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.282199) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645833) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645890) EVENT_LOG_v1 {"time_micros": 1764061498645878, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1055586, prev total WAL file size 1082074, number of live WAL files 2.
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.646748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303035' seq:72057594037927935, type:22 .. '6D6772737461740032323536' seq:0, type:0; will stop at (end)
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(651KB)], [122(10MB)]
Nov 25 04:04:58 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498646798, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11256581, "oldest_snapshot_seqno": -1}
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7211 keys, 8336726 bytes, temperature: kUnknown
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499023287, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8336726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8292042, "index_size": 25564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 189106, "raw_average_key_size": 26, "raw_value_size": 8166614, "raw_average_value_size": 1132, "num_data_blocks": 993, "num_entries": 7211, "num_filter_entries": 7211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.023810) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8336726 bytes
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.025778) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.9 rd, 22.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(29.4) write-amplify(12.5) OK, records in: 7694, records dropped: 483 output_compression: NoCompression
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.025834) EVENT_LOG_v1 {"time_micros": 1764061499025817, "job": 74, "event": "compaction_finished", "compaction_time_micros": 376797, "compaction_time_cpu_micros": 22633, "output_level": 6, "num_output_files": 1, "total_output_size": 8336726, "num_input_records": 7694, "num_output_records": 7211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499026417, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499028409, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.646681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:04:59 np0005534516 nova_compute[253538]: 2025-11-25 09:04:59.027 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:04:59 np0005534516 nova_compute[253538]: 2025-11-25 09:04:59.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:05:00 np0005534516 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:05:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38150164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.075 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.287 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.288 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3705MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.289 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.289 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.357 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.358 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.373 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:05:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468073979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.818 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.825 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.841 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.908 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:05:01 np0005534516 nova_compute[253538]: 2025-11-25 09:05:01.909 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:03 np0005534516 podman[397827]: 2025-11-25 09:05:03.845800125 +0000 UTC m=+0.103138525 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:05:04 np0005534516 nova_compute[253538]: 2025-11-25 09:05:04.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:05:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:05:04 np0005534516 nova_compute[253538]: 2025-11-25 09:05:04.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:09 np0005534516 nova_compute[253538]: 2025-11-25 09:05:09.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:09 np0005534516 nova_compute[253538]: 2025-11-25 09:05:09.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:14 np0005534516 nova_compute[253538]: 2025-11-25 09:05:14.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:14 np0005534516 nova_compute[253538]: 2025-11-25 09:05:14.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.586 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:b4:6d 2001:db8:0:1:f816:3eff:fe71:b46d 2001:db8::f816:3eff:fe71:b46d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe71:b46d/64 2001:db8::f816:3eff:fe71:b46d/64', 'neutron:device_id': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8aacb3c-1998-431a-ac4d-66021d7412c1) old=Port_Binding(mac=['fa:16:3e:71:b4:6d 2001:db8::f816:3eff:fe71:b46d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe71:b46d/64', 'neutron:device_id': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.588 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8aacb3c-1998-431a-ac4d-66021d7412c1 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 updated#033[00m
Nov 25 04:05:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.590 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c644b4d-59a5-410c-b57a-1faa3d063b78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:05:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.592 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e64fe2-92a6-48fa-bf3f-dd864d642c19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:19 np0005534516 nova_compute[253538]: 2025-11-25 09:05:19.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:19 np0005534516 nova_compute[253538]: 2025-11-25 09:05:19.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.140 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.141 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.167 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.280 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.280 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.297 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.298 253542 INFO nova.compute.claims [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.463 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:05:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036385225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.932 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.938 253542 DEBUG nova.compute.provider_tree [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.959 253542 DEBUG nova.scheduler.client.report [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.994 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:24 np0005534516 nova_compute[253538]: 2025-11-25 09:05:24.995 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.069 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.069 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.090 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.112 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.210 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.211 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.211 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating image(s)#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.234 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.261 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.285 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.288 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.357 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.357 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.358 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.358 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.380 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.383 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.413 253542 DEBUG nova.policy [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.658 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.730 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:05:25 np0005534516 podman[398021]: 2025-11-25 09:05:25.820202978 +0000 UTC m=+0.061688477 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.835 253542 DEBUG nova.objects.instance [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:05:25 np0005534516 podman[398006]: 2025-11-25 09:05:25.842445513 +0000 UTC m=+0.091804387 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.851 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Ensure instance console log exists: /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:25 np0005534516 nova_compute[253538]: 2025-11-25 09:05:25.853 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:05:26 np0005534516 nova_compute[253538]: 2025-11-25 09:05:26.544 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully created port: bc72cf9d-bb8d-4968-879b-a65c0e151d35 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:05:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:27Z|01416|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 04:05:27 np0005534516 nova_compute[253538]: 2025-11-25 09:05:27.350 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully created port: e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:05:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.108 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.109 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.122 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.137 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully updated port: bc72cf9d-bb8d-4968-879b-a65c0e151d35 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:05:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 106 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 775 KiB/s wr, 2 op/s
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.196 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.196 253542 INFO nova.compute.claims [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.213 253542 DEBUG nova.compute.manager [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.214 253542 DEBUG nova.compute.manager [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.214 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.215 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.215 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.313 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.382 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:05:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:05:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177567934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.778 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.782 253542 DEBUG nova.compute.provider_tree [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.834 253542 DEBUG nova.scheduler.client.report [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.849 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.861 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.862 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.864 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.898 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.898 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.911 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:05:28 np0005534516 nova_compute[253538]: 2025-11-25 09:05:28.926 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.005 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully updated port: e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.017 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.019 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.020 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating image(s)#033[00m
Nov 25 04:05:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:05:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:05:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:05:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.047 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.076 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.103 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.107 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.159 253542 DEBUG nova.policy [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '637a807a37ce403a8612d303b1acbb3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3908211615c4cbaae61d6e5833ca908', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.162 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.162 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.163 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.216 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.235 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.239 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 497131ea-c693-4c1d-b471-5b69d2294e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.373 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.661 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 497131ea-c693-4c1d-b471-5b69d2294e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.714 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] resizing rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.799 253542 DEBUG nova.objects.instance [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'migration_context' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Ensure instance console log exists: /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.819 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:29 np0005534516 nova_compute[253538]: 2025-11-25 09:05:29.819 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:30.089 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:30 np0005534516 nova_compute[253538]: 2025-11-25 09:05:30.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:30.091 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:05:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 141 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Nov 25 04:05:30 np0005534516 nova_compute[253538]: 2025-11-25 09:05:30.188 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Successfully created port: 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:05:30 np0005534516 nova_compute[253538]: 2025-11-25 09:05:30.324 253542 DEBUG nova.compute.manager [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:30 np0005534516 nova_compute[253538]: 2025-11-25 09:05:30.325 253542 DEBUG nova.compute.manager [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-e64e0c93-9ff8-4b26-9a7e-1bae8b024966. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:05:30 np0005534516 nova_compute[253538]: 2025-11-25 09:05:30.325 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.096 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Successfully updated port: 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.121 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.121 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.122 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.205 253542 DEBUG nova.compute.manager [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.206 253542 DEBUG nova.compute.manager [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing instance network info cache due to event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.206 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.313 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.552 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.572 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.574 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance network_info: |[{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.576 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.577 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.585 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start _get_guest_xml network_info=[{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.594 253542 WARNING nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.606 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.607 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.612 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.613 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.614 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.614 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.615 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.616 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.616 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.617 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.617 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.618 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.618 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.619 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.619 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.620 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:05:31 np0005534516 nova_compute[253538]: 2025-11-25 09:05:31.626 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.078 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.098 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.099 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance network_info: |[{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.100 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.101 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.106 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start _get_guest_xml network_info=[{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.113 253542 WARNING nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.125 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.126 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767503369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.133 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.134 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.135 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.135 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.136 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.136 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.137 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.137 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.138 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.139 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.139 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.140 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.140 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.141 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:05:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 156 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.157 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.211 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.237 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.241 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383854619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.646 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.669 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.672 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:05:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921679297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.702 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.704 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.704 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.706 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.707 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.707 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.708 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.709 253542 DEBUG nova.objects.instance [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.727 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <uuid>7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</uuid>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <name>instance-00000089</name>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1397309390</nova:name>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:05:31</nova:creationTime>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:port uuid="bc72cf9d-bb8d-4968-879b-a65c0e151d35">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <nova:port uuid="e64e0c93-9ff8-4b26-9a7e-1bae8b024966">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:414a" ipVersion="6"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:414a" ipVersion="6"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="serial">7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="uuid">7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:b7:a3:07"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <target dev="tapbc72cf9d-bb"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:6a:41:4a"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <target dev="tape64e0c93-9f"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/console.log" append="off"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:05:32 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:05:32 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:05:32 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:05:32 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.728 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Preparing to wait for external event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.728 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Preparing to wait for external event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.731 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.731 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.732 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.732 253542 DEBUG os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc72cf9d-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.739 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc72cf9d-bb, col_values=(('external_ids', {'iface-id': 'bc72cf9d-bb8d-4968-879b-a65c0e151d35', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a3:07', 'vm-uuid': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 NetworkManager[48915]: <info>  [1764061532.7422] manager: (tapbc72cf9d-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.748 253542 INFO os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb')#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.749 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.749 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.750 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.752 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.755 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64e0c93-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.756 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape64e0c93-9f, col_values=(('external_ids', {'iface-id': 'e64e0c93-9ff8-4b26-9a7e-1bae8b024966', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:41:4a', 'vm-uuid': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 NetworkManager[48915]: <info>  [1764061532.7580] manager: (tape64e0c93-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.765 253542 INFO os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f')#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.822 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.822 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.823 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:b7:a3:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.824 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:6a:41:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.824 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Using config drive#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.858 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.906 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port e64e0c93-9ff8-4b26-9a7e-1bae8b024966. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.907 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:32 np0005534516 nova_compute[253538]: 2025-11-25 09:05:32.925 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:05:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616471348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.100 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.101 253542 DEBUG nova.virt.libvirt.vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.101 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.102 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.102 253542 DEBUG nova.objects.instance [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'pci_devices' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.116 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <uuid>497131ea-c693-4c1d-b471-5b69d2294e3a</uuid>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <name>instance-0000008a</name>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestServerBasicOps-server-829104372</nova:name>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:05:32</nova:creationTime>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:user uuid="637a807a37ce403a8612d303b1acbb3b">tempest-TestServerBasicOps-1625299484-project-member</nova:user>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:project uuid="a3908211615c4cbaae61d6e5833ca908">tempest-TestServerBasicOps-1625299484</nova:project>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <nova:port uuid="44ba14ce-3677-4e53-b6ea-a21b98ba45d6">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="serial">497131ea-c693-4c1d-b471-5b69d2294e3a</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="uuid">497131ea-c693-4c1d-b471-5b69d2294e3a</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/497131ea-c693-4c1d-b471-5b69d2294e3a_disk">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:70:84:10"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <target dev="tap44ba14ce-36"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/console.log" append="off"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:05:33 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:05:33 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:05:33 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:05:33 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Preparing to wait for external event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.virt.libvirt.vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.120 253542 DEBUG os_vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.121 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.121 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.123 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44ba14ce-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.123 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44ba14ce-36, col_values=(('external_ids', {'iface-id': '44ba14ce-3677-4e53-b6ea-a21b98ba45d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:84:10', 'vm-uuid': '497131ea-c693-4c1d-b471-5b69d2294e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.1262] manager: (tap44ba14ce-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.136 253542 INFO os_vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36')#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.187 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating config drive at /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.193 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9nozznd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.235 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updated VIF entry in instance network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.236 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No VIF found with MAC fa:16:3e:70:84:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.240 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Using config drive#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.263 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.302 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.346 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9nozznd" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.375 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.380 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.546 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.548 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deleting local config drive /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config because it was imported into RBD.#033[00m
Nov 25 04:05:33 np0005534516 kernel: tapbc72cf9d-bb: entered promiscuous mode
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6155] manager: (tapbc72cf9d-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01417|binding|INFO|Claiming lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 for this chassis.
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01418|binding|INFO|bc72cf9d-bb8d-4968-879b-a65c0e151d35: Claiming fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6314] manager: (tape64e0c93-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Nov 25 04:05:33 np0005534516 kernel: tape64e0c93-9f: entered promiscuous mode
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01419|if_status|INFO|Dropped 1 log messages in last 117 seconds (most recently, 117 seconds ago) due to excessive rate
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01420|if_status|INFO|Not updating pb chassis for e64e0c93-9ff8-4b26-9a7e-1bae8b024966 now as sb is readonly
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01421|binding|INFO|Claiming lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for this chassis.
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01422|binding|INFO|e64e0c93-9ff8-4b26-9a7e-1bae8b024966: Claiming fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.644 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.646 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 bound to our chassis#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.648 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.655 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], port_security=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6a:414a/64 2001:db8::f816:3eff:fe6a:414a/64', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e64e0c93-9ff8-4b26-9a7e-1bae8b024966) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:33 np0005534516 systemd-udevd[398492]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:05:33 np0005534516 systemd-udevd[398490]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.662 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6c02d8-09fb-40a2-ba26-c49d2bad43a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.664 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f08e3a5-c1 in ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.667 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f08e3a5-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.667 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6075988e-33b8-46d7-b208-f78cbdd94779]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.668 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0d5d8b-dd84-4fd8-8104-a9ead0ddeabf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6794] device (tape64e0c93-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6811] device (tape64e0c93-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6819] device (tapbc72cf9d-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.6830] device (tapbc72cf9d-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.689 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[532abce3-0484-4d3f-a0d9-418b79667604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 systemd-machined[215790]: New machine qemu-167-instance-00000089.
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.717 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[230a67df-191c-4b0f-b054-e90610633696]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 systemd[1]: Started Virtual Machine qemu-167-instance-00000089.
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01423|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 ovn-installed in OVS
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01424|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 up in Southbound
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01425|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 ovn-installed in OVS
Nov 25 04:05:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:33Z|01426|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 up in Southbound
Nov 25 04:05:33 np0005534516 nova_compute[253538]: 2025-11-25 09:05:33.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.767 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7c98c8-b88f-4923-a00e-b828618ca04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12de0896-8ea9-4ac5-812c-7b916518d3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.7735] manager: (tap8f08e3a5-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.814 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cebcbdaa-a0af-474a-bf96-3a2d18d442cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.820 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f6c3b9-0b5d-4dfa-a446-316b1886bc27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 NetworkManager[48915]: <info>  [1764061533.8504] device (tap8f08e3a5-c0): carrier: link connected
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.856 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a4381c79-9114-49f5-a919-ba28a80fa2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.874 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a02026f4-8cae-436e-8a26-40f0cb3b2e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398531, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f360dd-1fe1-4373-989b-0faaf8538ce6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:645c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688844, 'tstamp': 688844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398532, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[567c1b95-62e8-4f5d-a622-57c19e40f5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398533, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.934 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a20ed8a6-1471-493d-9dd1-e7bb1e120003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.013 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb6e554-6c3e-44c4-8648-922bb013485f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.014 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.014 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.015 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.0174] manager: (tap8f08e3a5-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 kernel: tap8f08e3a5-c0: entered promiscuous mode
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.028 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.029 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01427|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.048 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a47b1b-5fd7-4f48-81bb-75d63b22601b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.049 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.050 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'env', 'PROCESS_TAG=haproxy-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f08e3a5-c18c-40d6-a052-3721725c11a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.115 253542 DEBUG nova.compute.manager [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.117 253542 DEBUG nova.compute.manager [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Processing event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:05:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 175 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.4 MiB/s wr, 56 op/s
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.183 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating config drive at /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.191 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplzgsjx_9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.361 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplzgsjx_9" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.403 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.409 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.452 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061534.4293022, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Started (Lifecycle Event)#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:34 np0005534516 podman[398634]: 2025-11-25 09:05:34.482375071 +0000 UTC m=+0.056654991 container create 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.483 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061534.4300125, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.484 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.508 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.515 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:05:34 np0005534516 systemd[1]: Started libpod-conmon-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope.
Nov 25 04:05:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:05:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80317eb3c6d5e4b152cb82d3e07a88ef2d85b9598516f68f790b2b2e984d1b96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.544 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:05:34 np0005534516 podman[398634]: 2025-11-25 09:05:34.455184283 +0000 UTC m=+0.029464223 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:05:34 np0005534516 podman[398634]: 2025-11-25 09:05:34.55993073 +0000 UTC m=+0.134210670 container init 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:05:34 np0005534516 podman[398634]: 2025-11-25 09:05:34.565549723 +0000 UTC m=+0.139829643 container start 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 04:05:34 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : New worker (398696) forked
Nov 25 04:05:34 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : Loading success.
Nov 25 04:05:34 np0005534516 podman[398648]: 2025-11-25 09:05:34.602847696 +0000 UTC m=+0.086655287 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.618 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.620 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.625 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.626 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deleting local config drive /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config because it was imported into RBD.#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.630 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1bd913-906d-49dc-b304-5aa4953cb671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.632 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c644b4d-51 in ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.634 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c644b4d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.634 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21a99274-543a-410c-a84b-3d526c9ab5b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.636 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[056efffe-544e-48be-a84c-68553f34179e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.651 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7c282ecb-1e71-4c56-ab74-0974b43e5bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.675 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f158f7-b3cb-40cb-a857-fcf6a3a9ccbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 kernel: tap44ba14ce-36: entered promiscuous mode
Nov 25 04:05:34 np0005534516 systemd-udevd[398514]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.6835] manager: (tap44ba14ce-36): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01428|binding|INFO|Claiming lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for this chassis.
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01429|binding|INFO|44ba14ce-3677-4e53-b6ea-a21b98ba45d6: Claiming fa:16:3e:70:84:10 10.100.0.14
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.7010] device (tap44ba14ce-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.7020] device (tap44ba14ce-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.702 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:84:10 10.100.0.14'], port_security=['fa:16:3e:70:84:10 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '497131ea-c693-4c1d-b471-5b69d2294e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbe7469c-9d57-4418-b63b-ede368786895', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3908211615c4cbaae61d6e5833ca908', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96995e66-0af1-4c06-becd-28a8c446152a ba12b146-5dc4-4552-8b04-abe689899999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15de48bd-9bbf-4354-b481-d22133abf514, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=44ba14ce-3677-4e53-b6ea-a21b98ba45d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.714 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfe04f2-a0d9-4aab-9e25-681f0171ba54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9481e425-c221-4215-b6be-e01f9bdac16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.7256] manager: (tap6c644b4d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/588)
Nov 25 04:05:34 np0005534516 systemd-machined[215790]: New machine qemu-168-instance-0000008a.
Nov 25 04:05:34 np0005534516 systemd[1]: Started Virtual Machine qemu-168-instance-0000008a.
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01430|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 ovn-installed in OVS
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01431|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 up in Southbound
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[460baff5-b3f8-4759-aa93-3bf2076e24df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.762 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b8ac9d-d92d-4d3e-ba10-48bd2462415f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.7848] device (tap6c644b4d-50): carrier: link connected
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[44099974-b431-4171-8c3d-cbe38b177236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.807 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f762e1-c18a-4cda-b4ff-ae59261f4cfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398735, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[addb5d77-e97a-4d74-b785-da408de57afd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b46d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688938, 'tstamp': 688938}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398738, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31594316-095a-4b43-92ec-214c72f4c079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398739, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69726ba1-2a0b-4b6d-82a7-8924e84f7d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.942 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[478a2528-1c10-4c15-9229-3ad6bb64f8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.946 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 kernel: tap6c644b4d-50: entered promiscuous mode
Nov 25 04:05:34 np0005534516 NetworkManager[48915]: <info>  [1764061534.9473] manager: (tap6c644b4d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.950 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:34Z|01432|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.954 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb18991-7ec5-48ed-b1fa-027c5f3a82b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.956 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:05:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.957 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'env', 'PROCESS_TAG=haproxy-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c644b4d-59a5-410c-b57a-1faa3d063b78.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:05:34 np0005534516 nova_compute[253538]: 2025-11-25 09:05:34.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.122 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.1219375, 497131ea-c693-4c1d-b471-5b69d2294e3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.123 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Started (Lifecycle Event)#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.140 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.145 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.1220737, 497131ea-c693-4c1d-b471-5b69d2294e3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.145 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.160 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.164 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.179 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:05:35 np0005534516 podman[398811]: 2025-11-25 09:05:35.340995662 +0000 UTC m=+0.029123573 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.526 253542 DEBUG nova.compute.manager [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.528 253542 DEBUG nova.compute.manager [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Processing event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.528 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.532646, 497131ea-c693-4c1d-b471-5b69d2294e3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.533 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.535 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.538 253542 INFO nova.virt.libvirt.driver [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance spawned successfully.#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.538 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:05:35 np0005534516 podman[398811]: 2025-11-25 09:05:35.556076129 +0000 UTC m=+0.244204020 container create d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.556 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.562 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.566 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.567 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.567 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.591 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:05:35 np0005534516 systemd[1]: Started libpod-conmon-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope.
Nov 25 04:05:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:05:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d698aa358060802f75d0f6998f934a98b483adfd949a7477e1c82db0e0b4996/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.747 253542 INFO nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 6.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.748 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:35 np0005534516 podman[398811]: 2025-11-25 09:05:35.7746156 +0000 UTC m=+0.462743521 container init d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 04:05:35 np0005534516 podman[398811]: 2025-11-25 09:05:35.780253113 +0000 UTC m=+0.468381014 container start d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:05:35 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : New worker (398832) forked
Nov 25 04:05:35 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : Loading success.
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.815 253542 INFO nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 7.65 seconds to build instance.#033[00m
Nov 25 04:05:35 np0005534516 nova_compute[253538]: 2025-11-25 09:05:35.836 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.186 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.186 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.187 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.187 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.188 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No event matching network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 in dict_keys([('network-vif-plugged', 'e64e0c93-9ff8-4b26-9a7e-1bae8b024966')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.189 253542 WARNING nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.189 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.190 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.190 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.191 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.191 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Processing event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.192 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.192 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.193 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.194 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.194 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.195 253542 WARNING nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.197 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.204 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.205 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061536.20414, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.205 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.209 253542 INFO nova.virt.libvirt.driver [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance spawned successfully.#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.209 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.223 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.226 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 in datapath dbe7469c-9d57-4418-b63b-ede368786895 unbound from our chassis#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.229 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.230 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dbe7469c-9d57-4418-b63b-ede368786895#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.231 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.232 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.232 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.243 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e10af586-fe1b-468b-a077-a66a7c876a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.244 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdbe7469c-91 in ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.248 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdbe7469c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.248 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63404250-426e-4d2f-a478-fb0121d42681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.250 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7169a4a-f970-41b1-88fa-55dac9a6ef7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.264 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.268 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2e17e037-3056-4a95-af14-9e1e4d8166d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd597c70-7a1b-484f-aa4d-7660a18b4dc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.293 253542 INFO nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 11.08 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.294 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.316 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f77843dc-bc9b-486e-bb6b-9decdeb2e8b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 NetworkManager[48915]: <info>  [1764061536.3235] manager: (tapdbe7469c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.322 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37d07e8c-2c34-44ef-8f6e-698c1b1ecac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.352 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[38ef0167-f6a9-48f7-b774-104cf4574fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.355 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[68b7c474-ace3-41f2-9f21-8c3d7f2c1af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.362 253542 INFO nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 12.13 seconds to build instance.#033[00m
Nov 25 04:05:36 np0005534516 NetworkManager[48915]: <info>  [1764061536.3786] device (tapdbe7469c-90): carrier: link connected
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.385 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea75ec1-172d-4fb0-b404-05b67cb1ebaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.396 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.405 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a498b58-e38e-4f60-90ea-b5e2c6bc249e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbe7469c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:82:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689097, 'reachable_time': 40708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398851, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.422 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d995d0d-d041-4ec8-9a7a-660c76d55e11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:8242'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689097, 'tstamp': 689097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398852, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3134650d-890c-42bd-b9a8-20e1f6080c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbe7469c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:82:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689097, 'reachable_time': 40708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398853, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf267bf-327d-45e2-a0a8-fb1cc8d68e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a0cec7-66ba-4ae6-b5d4-f6fd1174d145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.553 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbe7469c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.553 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.554 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbe7469c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:36 np0005534516 NetworkManager[48915]: <info>  [1764061536.5566] manager: (tapdbe7469c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Nov 25 04:05:36 np0005534516 kernel: tapdbe7469c-90: entered promiscuous mode
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.560 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdbe7469c-90, col_values=(('external_ids', {'iface-id': '4e603e97-58e9-4264-9b31-7189cd08be5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:05:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:36Z|01433|binding|INFO|Releasing lport 4e603e97-58e9-4264-9b31-7189cd08be5d from this chassis (sb_readonly=0)
Nov 25 04:05:36 np0005534516 nova_compute[253538]: 2025-11-25 09:05:36.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.576 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4032dcb4-9306-4e81-abc4-d70f372c8171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.578 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:05:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.578 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'env', 'PROCESS_TAG=haproxy-dbe7469c-9d57-4418-b63b-ede368786895', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dbe7469c-9d57-4418-b63b-ede368786895.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:05:36 np0005534516 podman[398886]: 2025-11-25 09:05:36.99040339 +0000 UTC m=+0.063678502 container create e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 04:05:37 np0005534516 systemd[1]: Started libpod-conmon-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope.
Nov 25 04:05:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:05:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183e12d28189ae10c66d159b8f25f1d139631c6af13b1ca004806ba514918c2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:37 np0005534516 podman[398886]: 2025-11-25 09:05:36.962277476 +0000 UTC m=+0.035552638 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:05:37 np0005534516 podman[398886]: 2025-11-25 09:05:37.064458773 +0000 UTC m=+0.137733885 container init e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 04:05:37 np0005534516 podman[398886]: 2025-11-25 09:05:37.074418914 +0000 UTC m=+0.147694026 container start e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 04:05:37 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : New worker (398907) forked
Nov 25 04:05:37 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : Loading success.
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.612 253542 DEBUG nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.613 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:05:37 np0005534516 nova_compute[253538]: 2025-11-25 09:05:37.615 253542 WARNING nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received unexpected event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:05:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:38 np0005534516 nova_compute[253538]: 2025-11-25 09:05:38.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 04:05:38 np0005534516 nova_compute[253538]: 2025-11-25 09:05:38.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:38 np0005534516 NetworkManager[48915]: <info>  [1764061538.7445] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Nov 25 04:05:38 np0005534516 NetworkManager[48915]: <info>  [1764061538.7457] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Nov 25 04:05:38 np0005534516 nova_compute[253538]: 2025-11-25 09:05:38.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:38Z|01434|binding|INFO|Releasing lport 4e603e97-58e9-4264-9b31-7189cd08be5d from this chassis (sb_readonly=0)
Nov 25 04:05:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:38Z|01435|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 04:05:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:38Z|01436|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 04:05:38 np0005534516 nova_compute[253538]: 2025-11-25 09:05:38.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.732 253542 DEBUG nova.compute.manager [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.736 253542 DEBUG nova.compute.manager [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing instance network info cache due to event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:39 np0005534516 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:05:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 176 op/s
Nov 25 04:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.089 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.091 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.166 253542 DEBUG nova.compute.manager [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.167 253542 DEBUG nova.compute.manager [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.168 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.168 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.169 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.521 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updated VIF entry in instance network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.522 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:41 np0005534516 nova_compute[253538]: 2025-11-25 09:05:41.541 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 173 op/s
Nov 25 04:05:42 np0005534516 nova_compute[253538]: 2025-11-25 09:05:42.738 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:05:42 np0005534516 nova_compute[253538]: 2025-11-25 09:05:42.739 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:42 np0005534516 nova_compute[253538]: 2025-11-25 09:05:42.759 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:43 np0005534516 nova_compute[253538]: 2025-11-25 09:05:43.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:44 np0005534516 nova_compute[253538]: 2025-11-25 09:05:44.051 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1012 KiB/s wr, 161 op/s
Nov 25 04:05:44 np0005534516 nova_compute[253538]: 2025-11-25 09:05:44.910 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:45 np0005534516 nova_compute[253538]: 2025-11-25 09:05:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 201 KiB/s wr, 144 op/s
Nov 25 04:05:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:48 np0005534516 nova_compute[253538]: 2025-11-25 09:05:48.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 142 op/s
Nov 25 04:05:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:48Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:84:10 10.100.0.14
Nov 25 04:05:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:48Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:84:10 10.100.0.14
Nov 25 04:05:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:48Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 04:05:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:48Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 04:05:49 np0005534516 nova_compute[253538]: 2025-11-25 09:05:49.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 210 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 188 op/s
Nov 25 04:05:50 np0005534516 nova_compute[253538]: 2025-11-25 09:05:50.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:50 np0005534516 nova_compute[253538]: 2025-11-25 09:05:50.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:05:50 np0005534516 nova_compute[253538]: 2025-11-25 09:05:50.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:05:51 np0005534516 nova_compute[253538]: 2025-11-25 09:05:51.360 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:05:51 np0005534516 nova_compute[253538]: 2025-11-25 09:05:51.360 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:05:51 np0005534516 nova_compute[253538]: 2025-11-25 09:05:51.361 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:05:51 np0005534516 nova_compute[253538]: 2025-11-25 09:05:51.361 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:05:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 242 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.2 MiB/s wr, 141 op/s
Nov 25 04:05:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:53 np0005534516 nova_compute[253538]: 2025-11-25 09:05:53.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:05:53
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'backups']
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:05:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:05:53 np0005534516 nova_compute[253538]: 2025-11-25 09:05:53.824 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:05:53 np0005534516 nova_compute[253538]: 2025-11-25 09:05:53.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:05:53 np0005534516 nova_compute[253538]: 2025-11-25 09:05:53.850 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:54 np0005534516 nova_compute[253538]: 2025-11-25 09:05:54.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:55.922 162847 DEBUG eventlet.wsgi.server [-] (162847) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 25 04:05:55 np0005534516 podman[399062]: 2025-11-25 09:05:55.92426977 +0000 UTC m=+0.059329974 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:55.924 162847 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: Accept: */*#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: Connection: close#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: Content-Type: text/plain#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: Host: 169.254.169.254#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: User-Agent: curl/7.84.0#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: X-Forwarded-For: 10.100.0.14#015
Nov 25 04:05:55 np0005534516 ovn_metadata_agent[162734]: X-Ovn-Network-Id: dbe7469c-9d57-4418-b63b-ede368786895 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 25 04:05:55 np0005534516 podman[399063]: 2025-11-25 09:05:55.949352731 +0000 UTC m=+0.082679188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:05:56 np0005534516 podman[399122]: 2025-11-25 09:05:56.028804812 +0000 UTC m=+0.060783204 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:05:56 np0005534516 podman[399122]: 2025-11-25 09:05:56.119566429 +0000 UTC m=+0.151544801 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:05:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 04:05:56 np0005534516 nova_compute[253538]: 2025-11-25 09:05:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:05:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:05:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:05:56 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:57 np0005534516 haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895[398907]: 10.100.0.14:60996 [25/Nov/2025:09:05:55.920] listener listener/metadata 0/0/0/1563/1563 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.483 162847 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.484 162847 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5605788#033[00m
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.616 162847 DEBUG eventlet.wsgi.server [-] (162847) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.617 162847 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: Accept: */*#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: Connection: close#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: Content-Length: 100#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: Content-Type: application/x-www-form-urlencoded#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: Host: 169.254.169.254#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: User-Agent: curl/7.84.0#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: X-Forwarded-For: 10.100.0.14#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: X-Ovn-Network-Id: dbe7469c-9d57-4418-b63b-ede368786895#015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: #015
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev cdd73f30-5f36-43f1-8f23-50a4f11b2be0 does not exist
Nov 25 04:05:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4e12e532-edcd-4e58-a1f0-d79dade0052c does not exist
Nov 25 04:05:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d6a61e78-fc78-40cc-8512-b6b98051d094 does not exist
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:05:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.897 162847 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 25 04:05:57 np0005534516 haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895[398907]: 10.100.0.14:32772 [25/Nov/2025:09:05:57.615] listener listener/metadata 0/0/0/282/282 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 25 04:05:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.898 162847 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2805822#033[00m
Nov 25 04:05:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:05:58 np0005534516 nova_compute[253538]: 2025-11-25 09:05:58.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.234024408 +0000 UTC m=+0.060561677 container create 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:05:58 np0005534516 systemd[1]: Started libpod-conmon-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope.
Nov 25 04:05:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:05:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:05:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.206050618 +0000 UTC m=+0.032587967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:05:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.329757541 +0000 UTC m=+0.156294850 container init 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.338220071 +0000 UTC m=+0.164757350 container start 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.341765897 +0000 UTC m=+0.168303166 container attach 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:05:58 np0005534516 laughing_lehmann[399567]: 167 167
Nov 25 04:05:58 np0005534516 systemd[1]: libpod-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope: Deactivated successfully.
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.345368915 +0000 UTC m=+0.171906194 container died 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:05:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2414d02f59ed5de810d19ecaef98cda7d3248155515dd23a2aa4e37ae18907b3-merged.mount: Deactivated successfully.
Nov 25 04:05:58 np0005534516 podman[399551]: 2025-11-25 09:05:58.389824354 +0000 UTC m=+0.216361653 container remove 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:05:58 np0005534516 systemd[1]: libpod-conmon-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope: Deactivated successfully.
Nov 25 04:05:58 np0005534516 podman[399592]: 2025-11-25 09:05:58.603761879 +0000 UTC m=+0.049587479 container create 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:05:58 np0005534516 systemd[1]: Started libpod-conmon-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope.
Nov 25 04:05:58 np0005534516 podman[399592]: 2025-11-25 09:05:58.57951044 +0000 UTC m=+0.025336090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:05:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:05:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:58 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:05:58 np0005534516 podman[399592]: 2025-11-25 09:05:58.698937617 +0000 UTC m=+0.144763237 container init 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:05:58 np0005534516 podman[399592]: 2025-11-25 09:05:58.708511377 +0000 UTC m=+0.154336977 container start 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:05:58 np0005534516 podman[399592]: 2025-11-25 09:05:58.71229343 +0000 UTC m=+0.158119160 container attach 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.753 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.754 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.755 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.756 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.756 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.758 253542 INFO nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Terminating instance#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.760 253542 DEBUG nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:05:59 np0005534516 kernel: tap44ba14ce-36 (unregistering): left promiscuous mode
Nov 25 04:05:59 np0005534516 NetworkManager[48915]: <info>  [1764061559.8150] device (tap44ba14ce-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:59Z|01437|binding|INFO|Releasing lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 from this chassis (sb_readonly=0)
Nov 25 04:05:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:59Z|01438|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 down in Southbound
Nov 25 04:05:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:05:59Z|01439|binding|INFO|Removing iface tap44ba14ce-36 ovn-installed in OVS
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:59 np0005534516 xenodochial_lederberg[399608]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:05:59 np0005534516 xenodochial_lederberg[399608]: --> relative data size: 1.0
Nov 25 04:05:59 np0005534516 xenodochial_lederberg[399608]: --> All data devices are unavailable
Nov 25 04:05:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.846 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:84:10 10.100.0.14'], port_security=['fa:16:3e:70:84:10 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '497131ea-c693-4c1d-b471-5b69d2294e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbe7469c-9d57-4418-b63b-ede368786895', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3908211615c4cbaae61d6e5833ca908', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96995e66-0af1-4c06-becd-28a8c446152a ba12b146-5dc4-4552-8b04-abe689899999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15de48bd-9bbf-4354-b481-d22133abf514, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=44ba14ce-3677-4e53-b6ea-a21b98ba45d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:05:59 np0005534516 nova_compute[253538]: 2025-11-25 09:05:59.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:05:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.850 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 in datapath dbe7469c-9d57-4418-b63b-ede368786895 unbound from our chassis#033[00m
Nov 25 04:05:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.853 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbe7469c-9d57-4418-b63b-ede368786895, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:05:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.854 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2b631f-7874-4198-aca0-af81c22e9bc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:05:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.855 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 namespace which is not needed anymore#033[00m
Nov 25 04:05:59 np0005534516 systemd[1]: libpod-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Deactivated successfully.
Nov 25 04:05:59 np0005534516 systemd[1]: libpod-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Consumed 1.093s CPU time.
Nov 25 04:05:59 np0005534516 podman[399592]: 2025-11-25 09:05:59.878226634 +0000 UTC m=+1.324052244 container died 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 04:05:59 np0005534516 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 25 04:05:59 np0005534516 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008a.scope: Consumed 13.914s CPU time.
Nov 25 04:05:59 np0005534516 systemd-machined[215790]: Machine qemu-168-instance-0000008a terminated.
Nov 25 04:05:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f-merged.mount: Deactivated successfully.
Nov 25 04:05:59 np0005534516 podman[399592]: 2025-11-25 09:05:59.937198358 +0000 UTC m=+1.383023958 container remove 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 04:05:59 np0005534516 systemd[1]: libpod-conmon-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Deactivated successfully.
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.003 253542 INFO nova.virt.libvirt.driver [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance destroyed successfully.#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.004 253542 DEBUG nova.objects.instance [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'resources' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : haproxy version is 2.8.14-c23fe91
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : path to executable is /usr/sbin/haproxy
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : Exiting Master process...
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : Exiting Master process...
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [ALERT]    (398905) : Current worker (398907) exited with code 143 (Terminated)
Nov 25 04:06:00 np0005534516 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : All workers exited. Exiting... (0)
Nov 25 04:06:00 np0005534516 systemd[1]: libpod-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope: Deactivated successfully.
Nov 25 04:06:00 np0005534516 conmon[398901]: conmon e894baea833b52fb0996 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope/container/memory.events
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.018 253542 DEBUG nova.virt.libvirt.vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": 
"fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.020 253542 DEBUG nova.network.os_vif_util [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:00 np0005534516 podman[399673]: 2025-11-25 09:06:00.021068818 +0000 UTC m=+0.058812410 container died e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.021 253542 DEBUG nova.network.os_vif_util [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.021 253542 DEBUG os_vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.023 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.023 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44ba14ce-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.028 253542 INFO os_vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36')#033[00m
Nov 25 04:06:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7-userdata-shm.mount: Deactivated successfully.
Nov 25 04:06:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-183e12d28189ae10c66d159b8f25f1d139631c6af13b1ca004806ba514918c2a-merged.mount: Deactivated successfully.
Nov 25 04:06:00 np0005534516 podman[399673]: 2025-11-25 09:06:00.070293855 +0000 UTC m=+0.108037407 container cleanup e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:06:00 np0005534516 systemd[1]: libpod-conmon-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope: Deactivated successfully.
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.102 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.103 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.108 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.110 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.123 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:06:00 np0005534516 podman[399778]: 2025-11-25 09:06:00.147364121 +0000 UTC m=+0.049796505 container remove e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:06:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.156 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03de744c-91ab-4b42-88a7-c0c3d7041809]: (4, ('Tue Nov 25 09:05:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 (e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7)\ne894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7\nTue Nov 25 09:06:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 (e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7)\ne894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.160 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2da27ee5-698d-42ac-beb5-e3e41aacfd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.161 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbe7469c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:00 np0005534516 kernel: tapdbe7469c-90: left promiscuous mode
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.177 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.183 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3733cd3f-a3d3-4dd0-b6c4-ad87c19fb540]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4c0c45-4ecb-4a75-b728-a3b22df33439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5803f4-b375-45ab-8448-0d8aeb72f02d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.207 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.208 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.217 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.217 253542 INFO nova.compute.claims [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76fbf831-f439-4924-8b9d-f234517900cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689090, 'reachable_time': 28151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399822, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.224 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:06:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.224 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1db601d8-44e5-4c1a-84db-2d52faf7c631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:00 np0005534516 systemd[1]: run-netns-ovnmeta\x2ddbe7469c\x2d9d57\x2d4418\x2db63b\x2dede368786895.mount: Deactivated successfully.
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.440 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.498 253542 INFO nova.virt.libvirt.driver [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deleting instance files /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a_del#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.499 253542 INFO nova.virt.libvirt.driver [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deletion of /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a_del complete#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 INFO nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 DEBUG oslo.service.loopingcall [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 DEBUG nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.568 253542 DEBUG nova.network.neutron [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.652129282 +0000 UTC m=+0.045644741 container create b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:06:00 np0005534516 systemd[1]: Started libpod-conmon-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope.
Nov 25 04:06:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.632788336 +0000 UTC m=+0.026303825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.73445415 +0000 UTC m=+0.127969669 container init b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.743722772 +0000 UTC m=+0.137238231 container start b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.747380891 +0000 UTC m=+0.140896460 container attach b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:06:00 np0005534516 sweet_moore[399924]: 167 167
Nov 25 04:06:00 np0005534516 systemd[1]: libpod-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope: Deactivated successfully.
Nov 25 04:06:00 np0005534516 conmon[399924]: conmon b8cec6758febf38b3d48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope/container/memory.events
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.752202083 +0000 UTC m=+0.145717552 container died b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:06:00 np0005534516 podman[399907]: 2025-11-25 09:06:00.788618443 +0000 UTC m=+0.182133902 container remove b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:06:00 np0005534516 systemd[1]: libpod-conmon-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope: Deactivated successfully.
Nov 25 04:06:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524204620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b9da89a1aa05cd1fd4333b5666d3ef0de34928fd08ef8022c58c6c195f7b1d10-merged.mount: Deactivated successfully.
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.918 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.927 253542 DEBUG nova.compute.provider_tree [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.940 253542 DEBUG nova.scheduler.client.report [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.960 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.961 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:06:00 np0005534516 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.017133084 +0000 UTC m=+0.057412191 container create 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.045 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.046 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:06:01 np0005534516 systemd[1]: Started libpod-conmon-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope.
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.070 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:00.995542858 +0000 UTC m=+0.035822055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:06:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:06:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.122728725 +0000 UTC m=+0.163007872 container init 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.131962526 +0000 UTC m=+0.172241633 container start 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.138 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.140035915 +0000 UTC m=+0.180315072 container attach 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.240 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.241 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.242 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating image(s)#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.267 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.298 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.323 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.327 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.415 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
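Note that `qemu-img info` is not run bare: it is wrapped in `oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which caps the child's address space at 1 GiB and its CPU time at 30 s so a crafted image cannot exhaust the host during introspection. A stdlib-only sketch of the same idea (not oslo's actual implementation), using `resource.setrlimit` in the child and a harmless command in place of `qemu-img`:

```python
import resource
import subprocess
import sys

def run_limited(cmd, as_bytes=1 << 30, cpu_secs=30):
    """Run cmd with RLIMIT_AS / RLIMIT_CPU caps applied in the child,
    similar in spirit to the prlimit wrapper seen in the log (POSIX only)."""
    def set_limits():
        # Executed in the forked child before exec().
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
    return subprocess.run(cmd, capture_output=True, text=True,
                          preexec_fn=set_limits)

# Usage: a stand-in command instead of qemu-img info --output=json.
result = run_limited([sys.executable, "-c", "print('ok')"])
print(result.returncode, result.stdout.strip())
```

With the limits in place, a runaway child is killed by the kernel (ENOMEM or SIGXCPU) instead of stalling the compute service.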
Nov 25 04:06:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415574734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.416 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.417 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.417 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.443 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.448 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 935c4eb2-999f-40a4-8643-0479d293c149_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.489 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.495 253542 DEBUG nova.policy [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.544 253542 DEBUG nova.network.neutron [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.567 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.567 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.579 253542 INFO nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 1.01 seconds to deallocate network for instance.#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.674 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.675 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.777 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 935c4eb2-999f-40a4-8643-0479d293c149_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
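The import above, followed by the resize to 1073741824 bytes a few lines later, is the usual two-step flow: upload the cached base image into the `vms` pool, then grow it to the flavor's root-disk size. A sketch of the command vectors involved; nova's `rbd_utils` actually resizes through librbd rather than the CLI, so the resize command here is only an illustration of the equivalent operation:

```python
from typing import List

def rbd_import_cmd(pool: str, base_path: str, image: str,
                   ceph_id: str = "openstack",
                   conf: str = "/etc/ceph/ceph.conf") -> List[str]:
    """Argument vector matching the `rbd import` visible in the log:
    the cached base file becomes a format-2 RBD image in the pool."""
    return ["rbd", "import", "--pool", pool, base_path, image,
            "--image-format=2", "--id", ceph_id, "--conf", conf]

def rbd_resize_cmd(pool: str, image: str, size_bytes: int,
                   ceph_id: str = "openstack",
                   conf: str = "/etc/ceph/ceph.conf") -> List[str]:
    """CLI equivalent of the librbd resize in the log; `rbd resize
    --size` takes megabytes, so the byte count is converted."""
    return ["rbd", "resize", "--pool", pool, "--image", image,
            "--size", str(size_bytes // (1024 * 1024)),
            "--id", ceph_id, "--conf", conf]

print(" ".join(rbd_resize_cmd("vms", "example_disk", 1073741824)))
```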
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.809 253542 DEBUG oslo_concurrency.processutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.875 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:06:01 np0005534516 great_merkle[399969]: {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    "0": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "devices": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "/dev/loop3"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            ],
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_name": "ceph_lv0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_size": "21470642176",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "name": "ceph_lv0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "tags": {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_name": "ceph",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.crush_device_class": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.encrypted": "0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_id": "0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.vdo": "0"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            },
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "vg_name": "ceph_vg0"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        }
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    ],
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    "1": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "devices": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "/dev/loop4"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            ],
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_name": "ceph_lv1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_size": "21470642176",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "name": "ceph_lv1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "tags": {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_name": "ceph",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.crush_device_class": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.encrypted": "0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_id": "1",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.vdo": "0"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            },
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "vg_name": "ceph_vg1"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        }
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    ],
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    "2": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "devices": [
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "/dev/loop5"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            ],
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_name": "ceph_lv2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_size": "21470642176",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "name": "ceph_lv2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "tags": {
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.cluster_name": "ceph",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.crush_device_class": "",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.encrypted": "0",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osd_id": "2",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:                "ceph.vdo": "0"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            },
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "type": "block",
Nov 25 04:06:01 np0005534516 great_merkle[399969]:            "vg_name": "ceph_vg2"
Nov 25 04:06:01 np0005534516 great_merkle[399969]:        }
Nov 25 04:06:01 np0005534516 great_merkle[399969]:    ]
Nov 25 04:06:01 np0005534516 great_merkle[399969]: }
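The JSON emitted by the short-lived `great_merkle` ceph container above has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to a list of LV records carrying the `ceph.*` tags. A small parser for that structure, using an abridged copy of the output shown in the log:

```python
import json

# Abridged from the container output above: OSD id -> list of LV records.
listing = json.loads("""
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
                  "ceph.type": "block"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
                  "ceph.type": "block"}}]
}
""")

def osd_block_devices(doc: dict) -> dict:
    """Map OSD id -> backing device for every LV tagged ceph.type=block."""
    out = {}
    for osd_id, lvs in doc.items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                out[int(osd_id)] = lv["devices"][0]
    return out

print(osd_block_devices(listing))
```

On this node the three OSDs sit on loop devices (`/dev/loop3`..`/dev/loop5`) backing LVs of ~20 GiB each, consistent with the 60 GiB total the pgmap line reports below.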
Nov 25 04:06:01 np0005534516 systemd[1]: libpod-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope: Deactivated successfully.
Nov 25 04:06:01 np0005534516 conmon[399969]: conmon 8ce155a6e844ad0ebdc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope/container/memory.events
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.940456364 +0000 UTC m=+0.980735471 container died 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:06:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a-merged.mount: Deactivated successfully.
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.978 253542 DEBUG nova.objects.instance [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.989 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.989 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Ensure instance console log exists: /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:01 np0005534516 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:01 np0005534516 podman[399950]: 2025-11-25 09:06:01.993146137 +0000 UTC m=+1.033425234 container remove 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:06:02 np0005534516 systemd[1]: libpod-conmon-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope: Deactivated successfully.
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.029 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.031 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3392MB free_disk=59.897193908691406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.031 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 214 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.9 MiB/s wr, 66 op/s
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.198 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.199 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 WARNING nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received unexpected event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.201 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-deleted-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053078326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.278 253542 DEBUG oslo_concurrency.processutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.283 253542 DEBUG nova.compute.provider_tree [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.302 253542 DEBUG nova.scheduler.client.report [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.324 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.327 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.356 253542 INFO nova.scheduler.client.report [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Deleted allocations for instance 497131ea-c693-4c1d-b471-5b69d2294e3a#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.416 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.417 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 935c4eb2-999f-40a4-8643-0479d293c149 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.425 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully created port: bf69fe43-dd03-40a9-a38f-2ec005c27f58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.441 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.479 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.620645995 +0000 UTC m=+0.049429975 container create 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:06:02 np0005534516 systemd[1]: Started libpod-conmon-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope.
Nov 25 04:06:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.604475945 +0000 UTC m=+0.033259955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.7282939 +0000 UTC m=+0.157077890 container init 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.737263674 +0000 UTC m=+0.166047664 container start 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:06:02 np0005534516 laughing_ishizaka[400378]: 167 167
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.743215007 +0000 UTC m=+0.171999037 container attach 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:06:02 np0005534516 systemd[1]: libpod-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope: Deactivated successfully.
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.745740684 +0000 UTC m=+0.174524684 container died 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 04:06:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b839f33407981b2d24abbbd96c883cd4be049992cbe0cced12dc03c73743ee13-merged.mount: Deactivated successfully.
Nov 25 04:06:02 np0005534516 podman[400343]: 2025-11-25 09:06:02.788471107 +0000 UTC m=+0.217255087 container remove 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:06:02 np0005534516 systemd[1]: libpod-conmon-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope: Deactivated successfully.
Nov 25 04:06:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134778005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.904 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.913 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:06:02 np0005534516 nova_compute[253538]: 2025-11-25 09:06:02.931 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:06:03 np0005534516 podman[400405]: 2025-11-25 09:06:02.999993656 +0000 UTC m=+0.057175954 container create 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:06:03 np0005534516 systemd[1]: Started libpod-conmon-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope.
Nov 25 04:06:03 np0005534516 nova_compute[253538]: 2025-11-25 09:06:03.053 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:06:03 np0005534516 nova_compute[253538]: 2025-11-25 09:06:03.054 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:06:03 np0005534516 podman[400405]: 2025-11-25 09:06:02.970897256 +0000 UTC m=+0.028079624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:06:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:03 np0005534516 podman[400405]: 2025-11-25 09:06:03.077024661 +0000 UTC m=+0.134206949 container init 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:06:03 np0005534516 podman[400405]: 2025-11-25 09:06:03.084690818 +0000 UTC m=+0.141873096 container start 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:06:03 np0005534516 podman[400405]: 2025-11-25 09:06:03.088519973 +0000 UTC m=+0.145702281 container attach 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:06:03 np0005534516 nova_compute[253538]: 2025-11-25 09:06:03.139 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully created port: 20e31743-4fc4-43d2-ab28-5205c776f506 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]: {
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_id": 1,
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "type": "bluestore"
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    },
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_id": 2,
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "type": "bluestore"
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    },
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_id": 0,
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:        "type": "bluestore"
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]:    }
Nov 25 04:06:04 np0005534516 friendly_leakey[400422]: }
Nov 25 04:06:04 np0005534516 systemd[1]: libpod-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Deactivated successfully.
Nov 25 04:06:04 np0005534516 systemd[1]: libpod-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Consumed 1.038s CPU time.
Nov 25 04:06:04 np0005534516 podman[400405]: 2025-11-25 09:06:04.116794106 +0000 UTC m=+1.173976384 container died 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9-merged.mount: Deactivated successfully.
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 199 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 438 KiB/s wr, 38 op/s
Nov 25 04:06:04 np0005534516 podman[400405]: 2025-11-25 09:06:04.167803402 +0000 UTC m=+1.224985690 container remove 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:06:04 np0005534516 systemd[1]: libpod-conmon-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Deactivated successfully.
Nov 25 04:06:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:06:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:06:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:06:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7640ac54-c959-4d89-a139-f80406d1c08b does not exist
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bedad813-7a0c-4d52-8961-898d14ac5226 does not exist
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009890262787328255 of space, bias 1.0, pg target 0.2967078836198477 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:06:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.855 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully updated port: bf69fe43-dd03-40a9-a38f-2ec005c27f58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:06:04 np0005534516 podman[400517]: 2025-11-25 09:06:04.860998217 +0000 UTC m=+0.112716116 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.968 253542 DEBUG nova.compute.manager [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.969 253542 DEBUG nova.compute.manager [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.969 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.970 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:04 np0005534516 nova_compute[253538]: 2025-11-25 09:06:04.970 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.119 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:06:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:06:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.389 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.400 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.916 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully updated port: 20e31743-4fc4-43d2-ab28-5205c776f506 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.940 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.940 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:05 np0005534516 nova_compute[253538]: 2025-11-25 09:06:05.941 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:06:06 np0005534516 nova_compute[253538]: 2025-11-25 09:06:06.109 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:06:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 192 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Nov 25 04:06:06 np0005534516 nova_compute[253538]: 2025-11-25 09:06:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:06 np0005534516 nova_compute[253538]: 2025-11-25 09:06:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:06:07 np0005534516 nova_compute[253538]: 2025-11-25 09:06:07.056 253542 DEBUG nova.compute.manager [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:07 np0005534516 nova_compute[253538]: 2025-11-25 09:06:07.056 253542 DEBUG nova.compute.manager [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-20e31743-4fc4-43d2-ab28-5205c776f506. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:06:07 np0005534516 nova_compute[253538]: 2025-11-25 09:06:07.057 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:07 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:07Z|01440|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 04:06:07 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:07Z|01441|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 04:06:07 np0005534516 nova_compute[253538]: 2025-11-25 09:06:07.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.197 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.218 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance network_info: |[{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port 20e31743-4fc4-43d2-ab28-5205c776f506 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.223 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start _get_guest_xml network_info=[{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.227 253542 WARNING nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.235 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.236 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.239 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.239 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.243 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.246 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:06:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142923082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.671 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.706 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:08 np0005534516 nova_compute[253538]: 2025-11-25 09:06:08.711 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:06:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124237264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.212 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.214 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.215 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.216 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.217 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.217 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.218 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.220 253542 DEBUG nova.objects.instance [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.251 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <uuid>935c4eb2-999f-40a4-8643-0479d293c149</uuid>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <name>instance-0000008b</name>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1102871071</nova:name>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:06:08</nova:creationTime>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:port uuid="bf69fe43-dd03-40a9-a38f-2ec005c27f58">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <nova:port uuid="20e31743-4fc4-43d2-ab28-5205c776f506">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fedb:48c2" ipVersion="6"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fedb:48c2" ipVersion="6"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="serial">935c4eb2-999f-40a4-8643-0479d293c149</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="uuid">935c4eb2-999f-40a4-8643-0479d293c149</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/935c4eb2-999f-40a4-8643-0479d293c149_disk">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/935c4eb2-999f-40a4-8643-0479d293c149_disk.config">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:8d:64:ff"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <target dev="tapbf69fe43-dd"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:db:48:c2"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <target dev="tap20e31743-4f"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/console.log" append="off"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:06:09 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:06:09 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:06:09 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:06:09 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.253 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Preparing to wait for external event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.254 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.254 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.255 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.255 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Preparing to wait for external event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.256 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.257 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.258 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.259 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.259 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.261 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.261 253542 DEBUG os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.262 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.263 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.263 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.267 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.268 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf69fe43-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.269 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf69fe43-dd, col_values=(('external_ids', {'iface-id': 'bf69fe43-dd03-40a9-a38f-2ec005c27f58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:64:ff', 'vm-uuid': '935c4eb2-999f-40a4-8643-0479d293c149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 NetworkManager[48915]: <info>  [1764061569.2716] manager: (tapbf69fe43-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.280 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.282 253542 INFO os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd')#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.283 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.283 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.285 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.286 253542 DEBUG os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20e31743-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.290 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20e31743-4f, col_values=(('external_ids', {'iface-id': '20e31743-4fc4-43d2-ab28-5205c776f506', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:48:c2', 'vm-uuid': '935c4eb2-999f-40a4-8643-0479d293c149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 NetworkManager[48915]: <info>  [1764061569.2920] manager: (tap20e31743-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.298 253542 INFO os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f')#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:8d:64:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.349 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:db:48:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.349 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Using config drive#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.373 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.773 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating config drive at /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.782 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacz45rq9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.927 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacz45rq9" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.951 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:09 np0005534516 nova_compute[253538]: 2025-11-25 09:06:09.954 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config 935c4eb2-999f-40a4-8643-0479d293c149_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.060 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port 20e31743-4fc4-43d2-ab28-5205c776f506. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.061 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.081 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.151 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config 935c4eb2-999f-40a4-8643-0479d293c149_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.152 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deleting local config drive /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config because it was imported into RBD.#033[00m
Nov 25 04:06:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 04:06:10 np0005534516 kernel: tapbf69fe43-dd: entered promiscuous mode
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2131] manager: (tapbf69fe43-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01442|binding|INFO|Claiming lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 for this chassis.
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01443|binding|INFO|bf69fe43-dd03-40a9-a38f-2ec005c27f58: Claiming fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2284] manager: (tap20e31743-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Nov 25 04:06:10 np0005534516 systemd-udevd[400681]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:06:10 np0005534516 systemd-udevd[400682]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:06:10 np0005534516 kernel: tap20e31743-4f: entered promiscuous mode
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2558] device (tap20e31743-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2567] device (tap20e31743-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.256 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:64:ff 10.100.0.9'], port_security=['fa:16:3e:8d:64:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bf69fe43-dd03-40a9-a38f-2ec005c27f58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bf69fe43-dd03-40a9-a38f-2ec005c27f58 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 bound to our chassis#033[00m
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01444|binding|INFO|Claiming lport 20e31743-4fc4-43d2-ab28-5205c776f506 for this chassis.
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01445|binding|INFO|20e31743-4fc4-43d2-ab28-5205c776f506: Claiming fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7#033[00m
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01446|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 ovn-installed in OVS
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2695] device (tapbf69fe43-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:06:10 np0005534516 NetworkManager[48915]: <info>  [1764061570.2710] device (tapbf69fe43-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:06:10 np0005534516 systemd-machined[215790]: New machine qemu-169-instance-0000008b.
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e64ec752-be3a-488c-8c52-66481760dded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01447|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 ovn-installed in OVS
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 systemd[1]: Started Virtual Machine qemu-169-instance-0000008b.
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01448|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 up in Southbound
Nov 25 04:06:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:10Z|01449|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 up in Southbound
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.287 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], port_security=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedb:48c2/64 2001:db8::f816:3eff:fedb:48c2/64', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=20e31743-4fc4-43d2-ab28-5205c776f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.309 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae351201-d09d-42aa-8836-b2fc7f9f7460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5fb44f-88be-4e82-8571-7c21720b3f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.341 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad7a109-3d53-4e26-b6fa-558601fff26d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[430f3209-147f-4936-8361-7fc3e9152240]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400698, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.376 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa628d71-3a90-4177-b969-9fb9a5d6c4ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688856, 'tstamp': 688856}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400700, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688860, 'tstamp': 688860}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400700, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.377 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.381 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.383 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 20e31743-4fc4-43d2-ab28-5205c776f506 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.384 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8e8396-8033-41a9-90d1-bcf565d40e75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.450 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01af1a60-1301-467e-84f3-d4e3473e4d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.453 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[085a088d-94ef-43e2-89b8-acbd99f43202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.487 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120672f9-6d84-4108-9fd0-e12d8650162b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.518 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fc5a16-e0f5-410a-9adb-7822ad65db05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400706, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[feaf6618-0c49-4e5d-90da-c3ffadf84199]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c644b4d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688953, 'tstamp': 688953}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400707, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.542 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.546 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.546 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.825 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061570.8244038, 935c4eb2-999f-40a4-8643-0479d293c149 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.826 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Started (Lifecycle Event)#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.848 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.852 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061570.8246381, 935c4eb2-999f-40a4-8643-0479d293c149 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.867 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.870 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.886 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.926 253542 DEBUG nova.compute.manager [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:10 np0005534516 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG nova.compute.manager [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Processing event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:06:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:11Z|01450|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 04:06:11 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:11Z|01451|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 04:06:11 np0005534516 nova_compute[253538]: 2025-11-25 09:06:11.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.008 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.009 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.009 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] No event matching network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 in dict_keys([('network-vif-plugged', '20e31743-4fc4-43d2-ab28-5205c776f506')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 WARNING nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received unexpected event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.011 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.011 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.012 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.012 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Processing event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] No waiting events found dispatching network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.015 253542 WARNING nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received unexpected event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.016 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.021 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061573.0208182, 935c4eb2-999f-40a4-8643-0479d293c149 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.021 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.026 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.034 253542 INFO nova.virt.libvirt.driver [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance spawned successfully.#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.035 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.043 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.052 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.061 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.062 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.063 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.064 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.065 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.065 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.071 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:06:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.129 253542 INFO nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 11.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.129 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.193 253542 INFO nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 13.01 seconds to build instance.#033[00m
Nov 25 04:06:13 np0005534516 nova_compute[253538]: 2025-11-25 09:06:13.335 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:14 np0005534516 nova_compute[253538]: 2025-11-25 09:06:14.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 04:06:14 np0005534516 nova_compute[253538]: 2025-11-25 09:06:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:14 np0005534516 nova_compute[253538]: 2025-11-25 09:06:14.998 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061559.9972892, 497131ea-c693-4c1d-b471-5b69d2294e3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:14 np0005534516 nova_compute[253538]: 2025-11-25 09:06:14.998 253542 INFO nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:06:15 np0005534516 nova_compute[253538]: 2025-11-25 09:06:15.021 253542 DEBUG nova.compute.manager [None req-dc11a86d-1a24-4d73-ae5f-c2e2681478ca - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Nov 25 04:06:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 612 KiB/s wr, 71 op/s
Nov 25 04:06:18 np0005534516 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG nova.compute.manager [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:18 np0005534516 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG nova.compute.manager [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:06:18 np0005534516 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:18 np0005534516 nova_compute[253538]: 2025-11-25 09:06:18.277 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:18 np0005534516 nova_compute[253538]: 2025-11-25 09:06:18.277 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.418 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.418 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.439 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:19 np0005534516 nova_compute[253538]: 2025-11-25 09:06:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 04:06:21 np0005534516 nova_compute[253538]: 2025-11-25 09:06:21.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 04:06:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:23 np0005534516 nova_compute[253538]: 2025-11-25 09:06:23.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:24 np0005534516 nova_compute[253538]: 2025-11-25 09:06:24.127 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 04:06:24 np0005534516 nova_compute[253538]: 2025-11-25 09:06:24.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 66 op/s
Nov 25 04:06:26 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:26Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 04:06:26 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:26Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 04:06:26 np0005534516 podman[400752]: 2025-11-25 09:06:26.815360746 +0000 UTC m=+0.068312609 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 04:06:26 np0005534516 podman[400753]: 2025-11-25 09:06:26.832132921 +0000 UTC m=+0.085429753 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 04:06:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 225 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 703 KiB/s wr, 66 op/s
Nov 25 04:06:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:06:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:06:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:06:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:06:29 np0005534516 nova_compute[253538]: 2025-11-25 09:06:29.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:29 np0005534516 nova_compute[253538]: 2025-11-25 09:06:29.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Nov 25 04:06:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:30.714 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:30 np0005534516 nova_compute[253538]: 2025-11-25 09:06:30.715 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:30 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:30.715 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:06:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:06:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:34 np0005534516 nova_compute[253538]: 2025-11-25 09:06:34.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:06:34 np0005534516 nova_compute[253538]: 2025-11-25 09:06:34.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:34 np0005534516 nova_compute[253538]: 2025-11-25 09:06:34.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:34.717 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:35 np0005534516 podman[400792]: 2025-11-25 09:06:35.859786141 +0000 UTC m=+0.106188918 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:06:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:06:37 np0005534516 nova_compute[253538]: 2025-11-25 09:06:37.958 253542 DEBUG nova.compute.manager [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:37 np0005534516 nova_compute[253538]: 2025-11-25 09:06:37.958 253542 DEBUG nova.compute.manager [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:06:37 np0005534516 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:37 np0005534516 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:37 np0005534516 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:06:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.120 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.120 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.122 253542 INFO nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Terminating instance#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.124 253542 DEBUG nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:06:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:06:38 np0005534516 kernel: tapbf69fe43-dd (unregistering): left promiscuous mode
Nov 25 04:06:38 np0005534516 NetworkManager[48915]: <info>  [1764061598.1881] device (tapbf69fe43-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01452|binding|INFO|Releasing lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 from this chassis (sb_readonly=0)
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01453|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 down in Southbound
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01454|binding|INFO|Removing iface tapbf69fe43-dd ovn-installed in OVS
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 kernel: tap20e31743-4f (unregistering): left promiscuous mode
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 NetworkManager[48915]: <info>  [1764061598.2369] device (tap20e31743-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01455|binding|INFO|Releasing lport 20e31743-4fc4-43d2-ab28-5205c776f506 from this chassis (sb_readonly=1)
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01456|binding|INFO|Removing iface tap20e31743-4f ovn-installed in OVS
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01457|if_status|INFO|Dropped 4 log messages in last 678 seconds (most recently, 667 seconds ago) due to excessive rate
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01458|if_status|INFO|Not setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 down as sb is readonly
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 25 04:06:38 np0005534516 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008b.scope: Consumed 14.264s CPU time.
Nov 25 04:06:38 np0005534516 systemd-machined[215790]: Machine qemu-169-instance-0000008b terminated.
Nov 25 04:06:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:38Z|01459|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 down in Southbound
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.307 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:64:ff 10.100.0.9'], port_security=['fa:16:3e:8d:64:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bf69fe43-dd03-40a9-a38f-2ec005c27f58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.308 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bf69fe43-dd03-40a9-a38f-2ec005c27f58 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2df7bb-6e90-4e63-9865-3fc1c590fcc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.347 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], port_security=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedb:48c2/64 2001:db8::f816:3eff:fedb:48c2/64', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=20e31743-4fc4-43d2-ab28-5205c776f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:38 np0005534516 NetworkManager[48915]: <info>  [1764061598.3606] manager: (tap20e31743-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.372 253542 INFO nova.virt.libvirt.driver [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance destroyed successfully.#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.372 253542 DEBUG nova.objects.instance [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.376 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7993fb21-a285-4821-ae38-1450db8ee4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.380 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a629a11-7630-4e21-a892-7dd1645139c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.383 253542 DEBUG nova.virt.libvirt.vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:06:13Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.386 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf69fe43-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.395 253542 INFO os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd')#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.396 253542 DEBUG nova.virt.libvirt.vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:06:13Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.396 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.397 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.397 253542 DEBUG os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.398 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20e31743-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.402 253542 INFO os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f')#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.413 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[578e39af-28ac-45ab-8a00-e4d5f49e0165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[354129dd-89bd-4de3-9b1e-06de76a0ccd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400873, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68366a5c-17c6-43cd-93c3-1d907bc44f0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688856, 'tstamp': 688856}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400877, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688860, 'tstamp': 688860}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400877, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.454 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 20e31743-4fc4-43d2-ab28-5205c776f506 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.456 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.471 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4af0f3-efba-4718-b22e-2ebe9eb2900c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.501 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4207e997-1d0f-4dbf-963a-aa9f01fb7819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.504 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6970267c-13db-4e5b-94fc-7b449ed2935e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.536 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd51e18-f869-4ae2-ab19-745e737a19fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf61d95-23ab-43d6-accb-64946197669c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400883, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.574 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1837d07-6a54-427a-89fe-2eebaf6f5364]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c644b4d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688953, 'tstamp': 688953}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400884, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.860 253542 INFO nova.virt.libvirt.driver [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deleting instance files /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149_del#033[00m
Nov 25 04:06:38 np0005534516 nova_compute[253538]: 2025-11-25 09:06:38.861 253542 INFO nova.virt.libvirt.driver [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deletion of /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149_del complete#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 INFO nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 DEBUG oslo.service.loopingcall [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 DEBUG nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.463 253542 DEBUG nova.network.neutron [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.701 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.702 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:39 np0005534516 nova_compute[253538]: 2025-11-25 09:06:39.721 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 223 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Nov 25 04:06:40 np0005534516 nova_compute[253538]: 2025-11-25 09:06:40.609 253542 DEBUG nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-deleted-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:40 np0005534516 nova_compute[253538]: 2025-11-25 09:06:40.609 253542 INFO nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Neutron deleted interface bf69fe43-dd03-40a9-a38f-2ec005c27f58; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 04:06:40 np0005534516 nova_compute[253538]: 2025-11-25 09:06:40.610 253542 DEBUG nova.network.neutron [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:40 np0005534516 nova_compute[253538]: 2025-11-25 09:06:40.653 253542 DEBUG nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Detach interface failed, port_id=bf69fe43-dd03-40a9-a38f-2ec005c27f58, reason: Instance 935c4eb2-999f-40a4-8643-0479d293c149 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 04:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.091 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:41 np0005534516 nova_compute[253538]: 2025-11-25 09:06:41.448 253542 DEBUG nova.network.neutron [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:41 np0005534516 nova_compute[253538]: 2025-11-25 09:06:41.467 253542 INFO nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 2.00 seconds to deallocate network for instance.#033[00m
Nov 25 04:06:41 np0005534516 nova_compute[253538]: 2025-11-25 09:06:41.516 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:41 np0005534516 nova_compute[253538]: 2025-11-25 09:06:41.517 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:41 np0005534516 nova_compute[253538]: 2025-11-25 09:06:41.708 253542 DEBUG oslo_concurrency.processutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534015723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 191 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 19 op/s
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.173 253542 DEBUG oslo_concurrency.processutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.182 253542 DEBUG nova.compute.provider_tree [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.274 253542 DEBUG nova.scheduler.client.report [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.296 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.319 253542 INFO nova.scheduler.client.report [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 935c4eb2-999f-40a4-8643-0479d293c149#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.377 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:42 np0005534516 nova_compute[253538]: 2025-11-25 09:06:42.699 253542 DEBUG nova.compute.manager [req-dc14f490-3551-4853-96f1-7858007ec18e req-10ffa19f-4784-4b80-86c7-8b370dd1a515 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-deleted-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.753 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.755 253542 INFO nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Terminating instance#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.756 253542 DEBUG nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:06:43 np0005534516 kernel: tapbc72cf9d-bb (unregistering): left promiscuous mode
Nov 25 04:06:43 np0005534516 NetworkManager[48915]: <info>  [1764061603.8215] device (tapbc72cf9d-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01460|binding|INFO|Releasing lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 from this chassis (sb_readonly=0)
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01461|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 down in Southbound
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01462|binding|INFO|Removing iface tapbc72cf9d-bb ovn-installed in OVS
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.837 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.839 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.841 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.841 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a21799-0477-40a3-9c0f-98197bc0829c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.842 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 namespace which is not needed anymore#033[00m
Nov 25 04:06:43 np0005534516 kernel: tape64e0c93-9f (unregistering): left promiscuous mode
Nov 25 04:06:43 np0005534516 NetworkManager[48915]: <info>  [1764061603.8475] device (tape64e0c93-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01463|binding|INFO|Releasing lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 from this chassis (sb_readonly=0)
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01464|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 down in Southbound
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01465|binding|INFO|Removing iface tape64e0c93-9f ovn-installed in OVS
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.862 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.867 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], port_security=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6a:414a/64 2001:db8::f816:3eff:fe6a:414a/64', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e64e0c93-9ff8-4b26-9a7e-1bae8b024966) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 25 04:06:43 np0005534516 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000089.scope: Consumed 15.714s CPU time.
Nov 25 04:06:43 np0005534516 systemd-machined[215790]: Machine qemu-167-instance-00000089 terminated.
Nov 25 04:06:43 np0005534516 kernel: tapbc72cf9d-bb: entered promiscuous mode
Nov 25 04:06:43 np0005534516 kernel: tapbc72cf9d-bb (unregistering): left promiscuous mode
Nov 25 04:06:43 np0005534516 NetworkManager[48915]: <info>  [1764061603.9829] manager: (tapbc72cf9d-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/599)
Nov 25 04:06:43 np0005534516 nova_compute[253538]: 2025-11-25 09:06:43.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01466|binding|INFO|Claiming lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 for this chassis.
Nov 25 04:06:43 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:43Z|01467|binding|INFO|bc72cf9d-bb8d-4968-879b-a65c0e151d35: Claiming fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 04:06:43 np0005534516 NetworkManager[48915]: <info>  [1764061603.9950] manager: (tape64e0c93-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Nov 25 04:06:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.996 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:44Z|01468|binding|INFO|Releasing lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 from this chassis (sb_readonly=0)
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.015 253542 INFO nova.virt.libvirt.driver [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance destroyed successfully.#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.016 253542 DEBUG nova.objects.instance [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.021 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : haproxy version is 2.8.14-c23fe91
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : path to executable is /usr/sbin/haproxy
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : Exiting Master process...
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : Exiting Master process...
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.027 253542 DEBUG nova.virt.libvirt.vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.028 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.028 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.029 253542 DEBUG os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.031 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72cf9d-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [ALERT]    (398689) : Current worker (398696) exited with code 143 (Terminated)
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : All workers exited. Exiting... (0)
Nov 25 04:06:44 np0005534516 systemd[1]: libpod-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope: Deactivated successfully.
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.042 253542 INFO os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb')#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.043 253542 DEBUG nova.virt.libvirt.vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.043 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:44 np0005534516 podman[400937]: 2025-11-25 09:06:44.043437876 +0000 UTC m=+0.083451959 container died 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.044 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.044 253542 DEBUG os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.046 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64e0c93-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.051 253542 INFO os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f')#033[00m
Nov 25 04:06:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435-userdata-shm.mount: Deactivated successfully.
Nov 25 04:06:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-80317eb3c6d5e4b152cb82d3e07a88ef2d85b9598516f68f790b2b2e984d1b96-merged.mount: Deactivated successfully.
Nov 25 04:06:44 np0005534516 podman[400937]: 2025-11-25 09:06:44.11014433 +0000 UTC m=+0.150158403 container cleanup 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:06:44 np0005534516 systemd[1]: libpod-conmon-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope: Deactivated successfully.
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 31 op/s
Nov 25 04:06:44 np0005534516 podman[400997]: 2025-11-25 09:06:44.190515395 +0000 UTC m=+0.058287226 container remove 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9a408f-a2f3-4688-934e-9cada135e482]: (4, ('Tue Nov 25 09:06:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 (51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435)\n51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435\nTue Nov 25 09:06:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 (51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435)\n51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.200 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2905fc10-d7eb-47ce-92ae-88bf2933c1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 kernel: tap8f08e3a5-c0: left promiscuous mode
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.222 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c65d9bc1-b666-4c41-b4a2-3b6edb6c0b3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.248 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e42c2d-f5b2-453b-becd-983d3e997d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.250 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff859f7-b1d0-49f6-a8f3-030865f47050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.264 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30b59c4f-7025-43e4-a102-b152f3c094d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688835, 'reachable_time': 36307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401012, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 systemd[1]: run-netns-ovnmeta\x2d8f08e3a5\x2dc18c\x2d40d6\x2da052\x2d3721725c11a7.mount: Deactivated successfully.
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.268 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.269 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5552b9-bb7a-41e1-a5f3-eabb197fe1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.272 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.273 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c644b4d-59a5-410c-b57a-1faa3d063b78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.274 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47cb48-5dbf-43f5-8adc-21897c3ab1f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 namespace which is not needed anymore#033[00m
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : haproxy version is 2.8.14-c23fe91
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : path to executable is /usr/sbin/haproxy
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : Exiting Master process...
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : Exiting Master process...
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [ALERT]    (398830) : Current worker (398832) exited with code 143 (Terminated)
Nov 25 04:06:44 np0005534516 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : All workers exited. Exiting... (0)
Nov 25 04:06:44 np0005534516 systemd[1]: libpod-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope: Deactivated successfully.
Nov 25 04:06:44 np0005534516 podman[401031]: 2025-11-25 09:06:44.419388467 +0000 UTC m=+0.056391955 container died d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 04:06:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6d698aa358060802f75d0f6998f934a98b483adfd949a7477e1c82db0e0b4996-merged.mount: Deactivated successfully.
Nov 25 04:06:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91-userdata-shm.mount: Deactivated successfully.
Nov 25 04:06:44 np0005534516 podman[401031]: 2025-11-25 09:06:44.460876244 +0000 UTC m=+0.097879662 container cleanup d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 04:06:44 np0005534516 systemd[1]: libpod-conmon-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope: Deactivated successfully.
Nov 25 04:06:44 np0005534516 podman[401062]: 2025-11-25 09:06:44.55119314 +0000 UTC m=+0.060434105 container remove d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e759d56-8fb6-447b-9801-26b9a2d9f471]: (4, ('Tue Nov 25 09:06:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 (d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91)\nd317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91\nTue Nov 25 09:06:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 (d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91)\nd317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.560 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d299deb-5413-4d2e-b7bf-0485eaa545fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.561 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 kernel: tap6c644b4d-50: left promiscuous mode
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed235222-a9fb-42fa-ab60-667ddcd659c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.618 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e464db56-c7b1-4a0a-a42c-c7f6e5365035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.620 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb819a57-d5dc-4cb9-834d-7ad679eccb12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.629 253542 INFO nova.virt.libvirt.driver [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deleting instance files /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_del#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.630 253542 INFO nova.virt.libvirt.driver [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deletion of /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_del complete#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1afffc7a-9ba4-40d9-8e0e-4a42bdf22bb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688930, 'reachable_time': 20816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401075, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.638 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.639 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1a706114-0dba-425b-8ced-3c42951d6ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.639 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.640 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4381bd9f-a9c1-4a24-b423-8db2fd52132f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.641 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.642 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:06:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5284d97-2816-4422-a6bf-2090a04ad274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.694 253542 INFO nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.694 253542 DEBUG oslo.service.loopingcall [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.695 253542 DEBUG nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:06:44 np0005534516 nova_compute[253538]: 2025-11-25 09:06:44.695 253542 DEBUG nova.network.neutron [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:06:45 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6c644b4d\x2d59a5\x2d410c\x2db57a\x2d1faa3d063b78.mount: Deactivated successfully.
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.663 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.663 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.664 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.665 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:06:45 np0005534516 nova_compute[253538]: 2025-11-25 09:06:45.665 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:06:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 148 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 31 KiB/s wr, 44 op/s
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.834 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.835 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.852 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.929 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.930 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.936 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:06:46 np0005534516 nova_compute[253538]: 2025-11-25 09:06:46.936 253542 INFO nova.compute.claims [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.061 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.212 253542 DEBUG nova.network.neutron [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.236 253542 INFO nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 2.54 seconds to deallocate network for instance.#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.279 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.443 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.444 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.465 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.466 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.468 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.468 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 WARNING nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 WARNING nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:06:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491286695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.496 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.503 253542 DEBUG nova.compute.provider_tree [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.518 253542 DEBUG nova.scheduler.client.report [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.538 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.539 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.541 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.596 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.596 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.617 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.626 253542 DEBUG oslo_concurrency.processutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.666 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.754 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-deleted-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.755 253542 INFO nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Neutron deleted interface e64e0c93-9ff8-4b26-9a7e-1bae8b024966; detaching it from the instance and deleting it from the info cache
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.755 253542 DEBUG nova.network.neutron [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.766 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.767 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.767 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating image(s)
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.788 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.810 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.842 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.847 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.879 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Detach interface failed, port_id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966, reason: Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.880 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-deleted-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.880 253542 INFO nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Neutron deleted interface bc72cf9d-bb8d-4968-879b-a65c0e151d35; detaching it from the instance and deleting it from the info cache
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.881 253542 DEBUG nova.network.neutron [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.902 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Detach interface failed, port_id=bc72cf9d-bb8d-4968-879b-a65c0e151d35, reason: Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.917 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.918 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.940 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:06:47 np0005534516 nova_compute[253538]: 2025-11-25 09:06:47.943 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aeb9ccf-2506-41d1-92c2-c72892096857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:06:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:06:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602972307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.136 253542 DEBUG oslo_concurrency.processutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.143 253542 DEBUG nova.compute.provider_tree [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.160 253542 DEBUG nova.scheduler.client.report [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:06:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 115 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 31 KiB/s wr, 49 op/s
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.195 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.234 253542 INFO nova.scheduler.client.report [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.309 253542 DEBUG nova.policy [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aef72e2ffce442d1848c4753c324ae92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 04:06:48 np0005534516 nova_compute[253538]: 2025-11-25 09:06:48.316 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:48 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.076 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aeb9ccf-2506-41d1-92c2-c72892096857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.145 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] resizing rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.195 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Successfully created port: 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.254 253542 DEBUG nova.objects.instance [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.265 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.266 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Ensure instance console log exists: /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.266 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.267 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:06:49 np0005534516 nova_compute[253538]: 2025-11-25 09:06:49.267 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 105 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 592 KiB/s wr, 63 op/s
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.495 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Successfully updated port: 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.533 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.534 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.534 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.626 253542 DEBUG nova.compute.manager [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.626 253542 DEBUG nova.compute.manager [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.627 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:06:50 np0005534516 nova_compute[253538]: 2025-11-25 09:06:50.729 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 04:06:51 np0005534516 nova_compute[253538]: 2025-11-25 09:06:51.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:06:51 np0005534516 nova_compute[253538]: 2025-11-25 09:06:51.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 04:06:51 np0005534516 nova_compute[253538]: 2025-11-25 09:06:51.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 04:06:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 115 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 774 KiB/s wr, 62 op/s
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.392 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.422 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance network_info: |[{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.426 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start _get_guest_xml network_info=[{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.431 253542 WARNING nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.435 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.436 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.441 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.442 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.442 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.448 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:06:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814264828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.903 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.939 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:52 np0005534516 nova_compute[253538]: 2025-11-25 09:06:52.944 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:06:53
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'backups', 'default.rgw.meta']
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.369 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061598.3686907, 935c4eb2-999f-40a4-8643-0479d293c149 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.370 253542 INFO nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.399 253542 DEBUG nova.compute.manager [None req-8a432def-77de-4807-8e39-dd150b949f05 - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:06:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153513911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.437 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.438 253542 DEBUG nova.virt.libvirt.vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:47Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.439 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.440 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.441 253542 DEBUG nova.objects.instance [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.456 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <uuid>7aeb9ccf-2506-41d1-92c2-c72892096857</uuid>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <name>instance-0000008c</name>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestSnapshotPattern-server-150948456</nova:name>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:06:52</nova:creationTime>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:user uuid="aef72e2ffce442d1848c4753c324ae92">tempest-TestSnapshotPattern-569624779-project-member</nova:user>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:project uuid="8771100a91ef4eb3b58cc4840f6154b4">tempest-TestSnapshotPattern-569624779</nova:project>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <nova:port uuid="0ddcebf0-d7e9-474d-b53b-d1746f5af8f2">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="serial">7aeb9ccf-2506-41d1-92c2-c72892096857</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="uuid">7aeb9ccf-2506-41d1-92c2-c72892096857</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:d3:95:36"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <target dev="tap0ddcebf0-d7"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/console.log" append="off"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:06:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:06:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:06:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:06:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Preparing to wait for external event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.459 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.460 253542 DEBUG nova.virt.libvirt.vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:47Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.460 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.461 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.461 253542 DEBUG os_vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.462 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.463 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ddcebf0-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ddcebf0-d7, col_values=(('external_ids', {'iface-id': '0ddcebf0-d7e9-474d-b53b-d1746f5af8f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:95:36', 'vm-uuid': '7aeb9ccf-2506-41d1-92c2-c72892096857'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:53 np0005534516 NetworkManager[48915]: <info>  [1764061613.4717] manager: (tap0ddcebf0-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:06:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.477 253542 INFO os_vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7')#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No VIF found with MAC fa:16:3e:d3:95:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.536 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Using config drive#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.556 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.966 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.967 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:06:53 np0005534516 nova_compute[253538]: 2025-11-25 09:06:53.985 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.037 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating config drive at /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.042 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_vie_v8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.184 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_vie_v8" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.207 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.210 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.376 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.377 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deleting local config drive /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config because it was imported into RBD.#033[00m
Nov 25 04:06:54 np0005534516 kernel: tap0ddcebf0-d7: entered promiscuous mode
Nov 25 04:06:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:54Z|01469|binding|INFO|Claiming lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for this chassis.
Nov 25 04:06:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:54Z|01470|binding|INFO|0ddcebf0-d7e9-474d-b53b-d1746f5af8f2: Claiming fa:16:3e:d3:95:36 10.100.0.13
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.4329] manager: (tap0ddcebf0-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.432 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.441 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:95:36 10.100.0.13'], port_security=['fa:16:3e:d3:95:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7aeb9ccf-2506-41d1-92c2-c72892096857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.442 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c bound to our chassis#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.444 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:54Z|01471|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 ovn-installed in OVS
Nov 25 04:06:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:54Z|01472|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 up in Southbound
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.455 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7625f36-2fd6-41bb-b9e4-498b72edd5b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.456 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c3eb82e-11 in ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.458 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c3eb82e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0924495c-3898-4253-9149-3b058444e523]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eceedb38-a206-478b-ab41-92ad1878a978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 systemd-udevd[401420]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.470 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b1e305-cd78-42d3-9ac8-750033b6d265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.4737] device (tap0ddcebf0-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.4757] device (tap0ddcebf0-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:06:54 np0005534516 systemd-machined[215790]: New machine qemu-170-instance-0000008c.
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e03ad70d-d0da-49af-a374-8cf18ed2641e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 systemd[1]: Started Virtual Machine qemu-170-instance-0000008c.
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.515 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf92dc7-92d4-42ad-adde-80017cc4e018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[434bb6b9-b889-4f31-8863-658dbc572ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.5218] manager: (tap3c3eb82e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.557 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[222b18ea-921a-47a8-b4b8-adc0305b0dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.558 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.560 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[70207f56-4fa5-4f35-ba13-ba7928172f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.5832] device (tap3c3eb82e-10): carrier: link connected
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.590 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50a840f6-32eb-4fa3-b69c-91696bf53750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.607 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1499df-a385-4971-9870-bef381feb570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401454, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97cac539-d2b8-4efb-955f-f374f6bf6246]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:4dac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696918, 'tstamp': 696918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401455, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.639 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc48c9e8-b2ca-4104-8041-36f4bb3c8e1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401456, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.667 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e55cf72-8ec1-4216-a8a4-a987cd3ff715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG nova.compute.manager [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.692 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.692 253542 DEBUG nova.compute.manager [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Processing event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d50ca00-c2ab-4fd5-a027-edce20d23f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.724 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 kernel: tap3c3eb82e-10: entered promiscuous mode
Nov 25 04:06:54 np0005534516 NetworkManager[48915]: <info>  [1764061614.7271] manager: (tap3c3eb82e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.729 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:54Z|01473|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.733 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc45e17-9bec-4a0d-b8ea-4195af8d9e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.735 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:06:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.736 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'env', 'PROCESS_TAG=haproxy-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:06:54 np0005534516 nova_compute[253538]: 2025-11-25 09:06:54.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.094 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.096 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.094353, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.096 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Started (Lifecycle Event)#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.098 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.103 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance spawned successfully.#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.104 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.115 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.121 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.125 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.126 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.126 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:06:55 np0005534516 podman[401530]: 2025-11-25 09:06:55.149618706 +0000 UTC m=+0.058369277 container create 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.167 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.167 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.096618, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.168 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.187 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.190 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.0980508, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.190 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.195 253542 INFO nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 7.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.195 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:55 np0005534516 systemd[1]: Started libpod-conmon-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope.
Nov 25 04:06:55 np0005534516 podman[401530]: 2025-11-25 09:06:55.116939708 +0000 UTC m=+0.025690279 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.217 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.220 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:06:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:06:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24402a25caa013e206ed4779ad47cd8d103184e7670c046105bc5ea23a229e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:06:55 np0005534516 podman[401530]: 2025-11-25 09:06:55.246792358 +0000 UTC m=+0.155542929 container init 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 04:06:55 np0005534516 podman[401530]: 2025-11-25 09:06:55.25204511 +0000 UTC m=+0.160795661 container start 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.256 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:06:55 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : New worker (401550) forked
Nov 25 04:06:55 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : Loading success.
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.284 253542 INFO nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 8.37 seconds to build instance.#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.301 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:06:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:55Z|01474|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:06:55Z|01475|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 04:06:55 np0005534516 nova_compute[253538]: 2025-11-25 09:06:55.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:06:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.752 253542 DEBUG nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:06:56 np0005534516 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 WARNING nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received unexpected event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with vm_state active and task_state None.
Nov 25 04:06:57 np0005534516 podman[401560]: 2025-11-25 09:06:57.818527018 +0000 UTC m=+0.066082937 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:06:57 np0005534516 podman[401561]: 2025-11-25 09:06:57.831234623 +0000 UTC m=+0.074206157 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:06:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:06:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 04:06:58 np0005534516 nova_compute[253538]: 2025-11-25 09:06:58.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:06:58 np0005534516 nova_compute[253538]: 2025-11-25 09:06:58.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:06:58 np0005534516 nova_compute[253538]: 2025-11-25 09:06:58.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:06:59 np0005534516 nova_compute[253538]: 2025-11-25 09:06:59.010 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061604.0089462, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:06:59 np0005534516 nova_compute[253538]: 2025-11-25 09:06:59.010 253542 INFO nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Stopped (Lifecycle Event)
Nov 25 04:06:59 np0005534516 nova_compute[253538]: 2025-11-25 09:06:59.032 253542 DEBUG nova.compute.manager [None req-b75574f4-44a0-461d-acd2-d1853650cd33 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:06:59 np0005534516 nova_compute[253538]: 2025-11-25 09:06:59.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.282927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619282976, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1222, "num_deletes": 251, "total_data_size": 1847332, "memory_usage": 1876624, "flush_reason": "Manual Compaction"}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619298462, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1818565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53165, "largest_seqno": 54386, "table_properties": {"data_size": 1812679, "index_size": 3217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12467, "raw_average_key_size": 19, "raw_value_size": 1800941, "raw_average_value_size": 2872, "num_data_blocks": 144, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061498, "oldest_key_time": 1764061498, "file_creation_time": 1764061619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 15680 microseconds, and 5143 cpu microseconds.
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.298614) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1818565 bytes OK
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.298661) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301824) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301846) EVENT_LOG_v1 {"time_micros": 1764061619301840, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1841767, prev total WAL file size 1841767, number of live WAL files 2.
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.302579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1775KB)], [125(8141KB)]
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619302602, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10155291, "oldest_snapshot_seqno": -1}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7324 keys, 8477771 bytes, temperature: kUnknown
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619361070, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8477771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8432044, "index_size": 26286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 192146, "raw_average_key_size": 26, "raw_value_size": 8304303, "raw_average_value_size": 1133, "num_data_blocks": 1019, "num_entries": 7324, "num_filter_entries": 7324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.361300) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8477771 bytes
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.362644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 144.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(10.2) write-amplify(4.7) OK, records in: 7838, records dropped: 514 output_compression: NoCompression
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.362658) EVENT_LOG_v1 {"time_micros": 1764061619362651, "job": 76, "event": "compaction_finished", "compaction_time_micros": 58555, "compaction_time_cpu_micros": 20015, "output_level": 6, "num_output_files": 1, "total_output_size": 8477771, "num_input_records": 7838, "num_output_records": 7324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619363128, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619364323, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.302533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:06:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:07:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 04:07:00 np0005534516 NetworkManager[48915]: <info>  [1764061620.3404] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Nov 25 04:07:00 np0005534516 NetworkManager[48915]: <info>  [1764061620.3415] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Nov 25 04:07:00 np0005534516 nova_compute[253538]: 2025-11-25 09:07:00.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:00 np0005534516 nova_compute[253538]: 2025-11-25 09:07:00.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:00Z|01476|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 04:07:00 np0005534516 nova_compute[253538]: 2025-11-25 09:07:00.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.508 253542 DEBUG nova.compute.manager [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG nova.compute.manager [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.571 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:07:01 np0005534516 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:07:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:07:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433178434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.024 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:07:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.206 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.206 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.372 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.446 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aeb9ccf-2506-41d1-92c2-c72892096857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.447 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.447 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.492 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:07:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:07:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885092081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.953 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.959 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:07:02 np0005534516 nova_compute[253538]: 2025-11-25 09:07:02.981 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.015 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.016 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:07:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.768 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.769 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:07:03 np0005534516 nova_compute[253538]: 2025-11-25 09:07:03.836 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:07:04 np0005534516 nova_compute[253538]: 2025-11-25 09:07:04.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 83 op/s
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:07:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:05 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3470c4f4-eb92-45cc-9d5a-111c323537b1 does not exist
Nov 25 04:07:05 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fd2a2eb5-8f50-4371-9a6c-1151a7d23693 does not exist
Nov 25 04:07:05 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 222a9998-08f3-4f1d-871d-424dd161aeb2 does not exist
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.829838608 +0000 UTC m=+0.057215326 container create 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:07:05 np0005534516 systemd[1]: Started libpod-conmon-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope.
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.797704115 +0000 UTC m=+0.025080843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.91229038 +0000 UTC m=+0.139667128 container init 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.918498498 +0000 UTC m=+0.145875216 container start 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:07:05 np0005534516 systemd[1]: libpod-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope: Deactivated successfully.
Nov 25 04:07:05 np0005534516 great_ardinghelli[401932]: 167 167
Nov 25 04:07:05 np0005534516 conmon[401932]: conmon 7135a79d554a31a47d48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope/container/memory.events
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.926026083 +0000 UTC m=+0.153402831 container attach 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.927259056 +0000 UTC m=+0.154635794 container died 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 04:07:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a363d05051ff9e0334c0cefbc004fd3d501e926facac4bf1fd0810e422858d56-merged.mount: Deactivated successfully.
Nov 25 04:07:05 np0005534516 podman[401916]: 2025-11-25 09:07:05.977641786 +0000 UTC m=+0.205018504 container remove 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 04:07:05 np0005534516 systemd[1]: libpod-conmon-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope: Deactivated successfully.
Nov 25 04:07:06 np0005534516 podman[401935]: 2025-11-25 09:07:06.011151467 +0000 UTC m=+0.099737943 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 04:07:06 np0005534516 podman[401981]: 2025-11-25 09:07:06.15364833 +0000 UTC m=+0.047856281 container create 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:07:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:07:06 np0005534516 systemd[1]: Started libpod-conmon-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope.
Nov 25 04:07:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:06 np0005534516 podman[401981]: 2025-11-25 09:07:06.132828465 +0000 UTC m=+0.027036446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:06 np0005534516 podman[401981]: 2025-11-25 09:07:06.257637197 +0000 UTC m=+0.151845198 container init 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:06 np0005534516 podman[401981]: 2025-11-25 09:07:06.263724663 +0000 UTC m=+0.157932624 container start 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:07:06 np0005534516 podman[401981]: 2025-11-25 09:07:06.283378737 +0000 UTC m=+0.177586688 container attach 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:07:07 np0005534516 admiring_darwin[401997]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:07:07 np0005534516 admiring_darwin[401997]: --> relative data size: 1.0
Nov 25 04:07:07 np0005534516 admiring_darwin[401997]: --> All data devices are unavailable
Nov 25 04:07:07 np0005534516 systemd[1]: libpod-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Deactivated successfully.
Nov 25 04:07:07 np0005534516 systemd[1]: libpod-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Consumed 1.013s CPU time.
Nov 25 04:07:07 np0005534516 podman[401981]: 2025-11-25 09:07:07.33300373 +0000 UTC m=+1.227211701 container died 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:07:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace-merged.mount: Deactivated successfully.
Nov 25 04:07:07 np0005534516 podman[401981]: 2025-11-25 09:07:07.83846033 +0000 UTC m=+1.732668321 container remove 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:07:07 np0005534516 systemd[1]: libpod-conmon-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Deactivated successfully.
Nov 25 04:07:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 04:07:08 np0005534516 nova_compute[253538]: 2025-11-25 09:07:08.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.491816432 +0000 UTC m=+0.040703928 container create ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:07:08 np0005534516 systemd[1]: Started libpod-conmon-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope.
Nov 25 04:07:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.474348296 +0000 UTC m=+0.023235822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.569936704 +0000 UTC m=+0.118824220 container init ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.576576966 +0000 UTC m=+0.125464462 container start ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.580915003 +0000 UTC m=+0.129802519 container attach ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:07:08 np0005534516 practical_blackburn[402191]: 167 167
Nov 25 04:07:08 np0005534516 systemd[1]: libpod-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope: Deactivated successfully.
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.583295538 +0000 UTC m=+0.132183034 container died ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:07:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b19b6fbfffe4eaefbb8866cae7a026dc3431cc17e5ec8d394a10331157f13130-merged.mount: Deactivated successfully.
Nov 25 04:07:08 np0005534516 podman[402175]: 2025-11-25 09:07:08.626913614 +0000 UTC m=+0.175801110 container remove ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:08 np0005534516 systemd[1]: libpod-conmon-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope: Deactivated successfully.
Nov 25 04:07:08 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:08Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:95:36 10.100.0.13
Nov 25 04:07:08 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:95:36 10.100.0.13
Nov 25 04:07:08 np0005534516 podman[402214]: 2025-11-25 09:07:08.784627211 +0000 UTC m=+0.041027667 container create 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:07:08 np0005534516 systemd[1]: Started libpod-conmon-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope.
Nov 25 04:07:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:08 np0005534516 podman[402214]: 2025-11-25 09:07:08.768933345 +0000 UTC m=+0.025333811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:08 np0005534516 podman[402214]: 2025-11-25 09:07:08.872841309 +0000 UTC m=+0.129241785 container init 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 04:07:08 np0005534516 podman[402214]: 2025-11-25 09:07:08.888145025 +0000 UTC m=+0.144545511 container start 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 04:07:08 np0005534516 podman[402214]: 2025-11-25 09:07:08.892215896 +0000 UTC m=+0.148616372 container attach 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:07:09 np0005534516 nova_compute[253538]: 2025-11-25 09:07:09.150 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:09 np0005534516 festive_wu[402230]: {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    "0": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "devices": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "/dev/loop3"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            ],
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_name": "ceph_lv0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_size": "21470642176",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "name": "ceph_lv0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "tags": {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_name": "ceph",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.crush_device_class": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.encrypted": "0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_id": "0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.vdo": "0"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            },
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "vg_name": "ceph_vg0"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        }
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    ],
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    "1": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "devices": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "/dev/loop4"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            ],
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_name": "ceph_lv1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_size": "21470642176",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "name": "ceph_lv1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "tags": {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_name": "ceph",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.crush_device_class": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.encrypted": "0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_id": "1",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.vdo": "0"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            },
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "vg_name": "ceph_vg1"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        }
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    ],
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    "2": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "devices": [
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "/dev/loop5"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            ],
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_name": "ceph_lv2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_size": "21470642176",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "name": "ceph_lv2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "tags": {
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.cluster_name": "ceph",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.crush_device_class": "",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.encrypted": "0",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osd_id": "2",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:                "ceph.vdo": "0"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            },
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "type": "block",
Nov 25 04:07:09 np0005534516 festive_wu[402230]:            "vg_name": "ceph_vg2"
Nov 25 04:07:09 np0005534516 festive_wu[402230]:        }
Nov 25 04:07:09 np0005534516 festive_wu[402230]:    ]
Nov 25 04:07:09 np0005534516 festive_wu[402230]: }
Nov 25 04:07:09 np0005534516 systemd[1]: libpod-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope: Deactivated successfully.
Nov 25 04:07:09 np0005534516 podman[402214]: 2025-11-25 09:07:09.706195753 +0000 UTC m=+0.962596219 container died 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7-merged.mount: Deactivated successfully.
Nov 25 04:07:09 np0005534516 podman[402214]: 2025-11-25 09:07:09.784917003 +0000 UTC m=+1.041317469 container remove 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:07:09 np0005534516 systemd[1]: libpod-conmon-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope: Deactivated successfully.
Nov 25 04:07:09 np0005534516 nova_compute[253538]: 2025-11-25 09:07:09.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 159 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.426116983 +0000 UTC m=+0.038051456 container create 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:07:10 np0005534516 systemd[1]: Started libpod-conmon-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope.
Nov 25 04:07:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.409302756 +0000 UTC m=+0.021237249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.507076044 +0000 UTC m=+0.119010537 container init 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.514970549 +0000 UTC m=+0.126905032 container start 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.518829663 +0000 UTC m=+0.130764206 container attach 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:07:10 np0005534516 friendly_beaver[402407]: 167 167
Nov 25 04:07:10 np0005534516 systemd[1]: libpod-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope: Deactivated successfully.
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.521853376 +0000 UTC m=+0.133787849 container died 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:07:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b13b8fdbceb8e9673a4721cfd00f23e4d9bd736e4971706cf6e7b9f7771da55f-merged.mount: Deactivated successfully.
Nov 25 04:07:10 np0005534516 podman[402391]: 2025-11-25 09:07:10.559548221 +0000 UTC m=+0.171482694 container remove 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:07:10 np0005534516 systemd[1]: libpod-conmon-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope: Deactivated successfully.
Nov 25 04:07:10 np0005534516 podman[402430]: 2025-11-25 09:07:10.736379648 +0000 UTC m=+0.042381483 container create 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:07:10 np0005534516 systemd[1]: Started libpod-conmon-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope.
Nov 25 04:07:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:10 np0005534516 podman[402430]: 2025-11-25 09:07:10.718214933 +0000 UTC m=+0.024216778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:07:10 np0005534516 podman[402430]: 2025-11-25 09:07:10.820875125 +0000 UTC m=+0.126877010 container init 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:07:10 np0005534516 podman[402430]: 2025-11-25 09:07:10.837747953 +0000 UTC m=+0.143749808 container start 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:07:10 np0005534516 podman[402430]: 2025-11-25 09:07:10.842235626 +0000 UTC m=+0.148237461 container attach 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]: {
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_id": 1,
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "type": "bluestore"
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    },
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_id": 2,
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "type": "bluestore"
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    },
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_id": 0,
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:        "type": "bluestore"
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]:    }
Nov 25 04:07:11 np0005534516 blissful_diffie[402448]: }
Nov 25 04:07:11 np0005534516 systemd[1]: libpod-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Deactivated successfully.
Nov 25 04:07:11 np0005534516 systemd[1]: libpod-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Consumed 1.024s CPU time.
Nov 25 04:07:11 np0005534516 podman[402430]: 2025-11-25 09:07:11.858044349 +0000 UTC m=+1.164046184 container died 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c-merged.mount: Deactivated successfully.
Nov 25 04:07:11 np0005534516 podman[402430]: 2025-11-25 09:07:11.916521778 +0000 UTC m=+1.222523603 container remove 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:07:11 np0005534516 systemd[1]: libpod-conmon-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Deactivated successfully.
Nov 25 04:07:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:07:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:07:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8cb696f3-9849-4981-b480-3cca206d97c7 does not exist
Nov 25 04:07:11 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0378c638-7eb0-42ff-a448-df7657e204fa does not exist
Nov 25 04:07:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 04:07:12 np0005534516 nova_compute[253538]: 2025-11-25 09:07:12.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:12 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:07:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:13 np0005534516 nova_compute[253538]: 2025-11-25 09:07:13.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:14 np0005534516 nova_compute[253538]: 2025-11-25 09:07:14.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 04:07:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 04:07:17 np0005534516 nova_compute[253538]: 2025-11-25 09:07:17.927 253542 DEBUG nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:17 np0005534516 nova_compute[253538]: 2025-11-25 09:07:17.987 253542 INFO nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] instance snapshotting#033[00m
Nov 25 04:07:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 04:07:18 np0005534516 nova_compute[253538]: 2025-11-25 09:07:18.281 253542 INFO nova.virt.libvirt.driver [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Beginning live snapshot process#033[00m
Nov 25 04:07:18 np0005534516 nova_compute[253538]: 2025-11-25 09:07:18.460 253542 DEBUG nova.virt.libvirt.imagebackend [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 04:07:18 np0005534516 nova_compute[253538]: 2025-11-25 09:07:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:18 np0005534516 nova_compute[253538]: 2025-11-25 09:07:18.928 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(e7a5b94946234757868df5f88e1b5970) on rbd image(7aeb9ccf-2506-41d1-92c2-c72892096857_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 04:07:18 np0005534516 nova_compute[253538]: 2025-11-25 09:07:18.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:19 np0005534516 nova_compute[253538]: 2025-11-25 09:07:19.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 25 04:07:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 25 04:07:19 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 25 04:07:19 np0005534516 nova_compute[253538]: 2025-11-25 09:07:19.375 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk@e7a5b94946234757868df5f88e1b5970 to images/cea21f13-1c78-4633-9d51-3cb641934c22 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 04:07:19 np0005534516 nova_compute[253538]: 2025-11-25 09:07:19.485 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] flattening images/cea21f13-1c78-4633-9d51-3cb641934c22 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 04:07:19 np0005534516 nova_compute[253538]: 2025-11-25 09:07:19.879 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] removing snapshot(e7a5b94946234757868df5f88e1b5970) on rbd image(7aeb9ccf-2506-41d1-92c2-c72892096857_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 04:07:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 620 KiB/s wr, 42 op/s
Nov 25 04:07:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 25 04:07:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 25 04:07:20 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 25 04:07:20 np0005534516 nova_compute[253538]: 2025-11-25 09:07:20.636 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(snap) on rbd image(cea21f13-1c78-4633-9d51-3cb641934c22) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 04:07:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.893 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:b7:82 2001:db8:0:1:f816:3eff:fea4:b782 2001:db8::f816:3eff:fea4:b782'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:b782/64 2001:db8::f816:3eff:fea4:b782/64', 'neutron:device_id': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693) old=Port_Binding(mac=['fa:16:3e:a4:b7:82 2001:db8::f816:3eff:fea4:b782'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:b782/64', 'neutron:device_id': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:07:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.895 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 updated#033[00m
Nov 25 04:07:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.898 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:07:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[841d2420-a318-4d87-a202-e36ae8fe2fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 25 04:07:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 25 04:07:21 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 25 04:07:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 196 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 24 op/s
Nov 25 04:07:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:23 np0005534516 nova_compute[253538]: 2025-11-25 09:07:23.268 253542 INFO nova.virt.libvirt.driver [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Snapshot image upload complete#033[00m
Nov 25 04:07:23 np0005534516 nova_compute[253538]: 2025-11-25 09:07:23.269 253542 INFO nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 5.28 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:23 np0005534516 nova_compute[253538]: 2025-11-25 09:07:23.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:24 np0005534516 nova_compute[253538]: 2025-11-25 09:07:24.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 219 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.3 MiB/s wr, 126 op/s
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.556 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.557 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.580 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.680 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.681 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.690 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.690 253542 INFO nova.compute.claims [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:07:25 np0005534516 nova_compute[253538]: 2025-11-25 09:07:25.808 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 155 op/s
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.236 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.237 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:07:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362277075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.270 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.274 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.279 253542 DEBUG nova.compute.provider_tree [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.299 253542 DEBUG nova.scheduler.client.report [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.334 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.334 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.355 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.355 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.362 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.362 253542 INFO nova.compute.claims [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.404 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.404 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.431 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.453 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.675 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.795 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.797 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.797 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating image(s)#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.823 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.858 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.886 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.891 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.989 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.991 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.992 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:26 np0005534516 nova_compute[253538]: 2025-11-25 09:07:26.992 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.027 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.032 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ec67ec-7042-47d0-925d-6ff3847d3846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:07:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488964994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.191 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.201 253542 DEBUG nova.compute.provider_tree [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.226 253542 DEBUG nova.scheduler.client.report [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.254 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.255 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.317 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.318 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.324 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ec67ec-7042-47d0-925d-6ff3847d3846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.353 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.388 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.395 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.427 253542 DEBUG nova.policy [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.494 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.496 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating image(s)#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.518 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.542 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.568 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.572 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "61b07ac455ac595ffac8250648100eba5804ec9e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.573 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "61b07ac455ac595ffac8250648100eba5804ec9e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.583 253542 DEBUG nova.objects.instance [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.605 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.606 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Ensure instance console log exists: /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.770 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.810 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.811 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning images/cea21f13-1c78-4633-9d51-3cb641934c22@snap to None/de5bfbef-7a99-4280-a304-71b9099f110b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 04:07:27 np0005534516 nova_compute[253538]: 2025-11-25 09:07:27.915 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "61b07ac455ac595ffac8250648100eba5804ec9e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.046 253542 DEBUG nova.objects.instance [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'migration_context' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.057 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.058 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Ensure instance console log exists: /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.058 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.059 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.059 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 25 04:07:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 25 04:07:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 25 04:07:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 139 op/s
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.284 253542 DEBUG nova.policy [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aef72e2ffce442d1848c4753c324ae92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:28 np0005534516 podman[403072]: 2025-11-25 09:07:28.815164023 +0000 UTC m=+0.063317762 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:07:28 np0005534516 podman[403073]: 2025-11-25 09:07:28.831271341 +0000 UTC m=+0.070772975 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 04:07:28 np0005534516 nova_compute[253538]: 2025-11-25 09:07:28.852 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully created port: 05d0fd93-ce0f-4842-962f-c9491d3850c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:07:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:07:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:07:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:07:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:07:29 np0005534516 nova_compute[253538]: 2025-11-25 09:07:29.069 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Successfully created port: ff88ca8a-d270-4991-b2c4-617f04418848 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:07:29 np0005534516 nova_compute[253538]: 2025-11-25 09:07:29.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:29 np0005534516 nova_compute[253538]: 2025-11-25 09:07:29.456 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully created port: 2eda6ce1-df50-4620-a5e9-d08e62f7350e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:07:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.276 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully updated port: 05d0fd93-ce0f-4842-962f-c9491d3850c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.307 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Successfully updated port: ff88ca8a-d270-4991-b2c4-617f04418848 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.321 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.321 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.322 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.384 253542 DEBUG nova.compute.manager [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.385 253542 DEBUG nova.compute.manager [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.388 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.389 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.389 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.403 253542 DEBUG nova.compute.manager [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.404 253542 DEBUG nova.compute.manager [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.404 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.503 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:07:30 np0005534516 nova_compute[253538]: 2025-11-25 09:07:30.638 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.084 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.099 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.173 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully updated port: 2eda6ce1-df50-4620-a5e9-d08e62f7350e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.332 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.347 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.352 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.353 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance network_info: |[{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.354 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.354 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.359 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start _get_guest_xml network_info=[{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:07:17Z,direct_url=<?>,disk_format='raw',id=cea21f13-1c78-4633-9d51-3cb641934c22,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-651714253',owner='8771100a91ef4eb3b58cc4840f6154b4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:07:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': 'cea21f13-1c78-4633-9d51-3cb641934c22'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.366 253542 WARNING nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.377 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.378 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.383 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.383 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.384 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.384 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:07:17Z,direct_url=<?>,disk_format='raw',id=cea21f13-1c78-4633-9d51-3cb641934c22,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-651714253',owner='8771100a91ef4eb3b58cc4840f6154b4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:07:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.385 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.385 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.388 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.388 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.392 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:07:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512203468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.882 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.911 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:31 np0005534516 nova_compute[253538]: 2025-11-25 09:07:31.917 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 MiB/s wr, 159 op/s
Nov 25 04:07:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:07:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725074280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.396 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.398 253542 DEBUG nova.virt.libvirt.vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:27Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=de5bfbef-7a
99-4280-a304-71b9099f110b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.398 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.399 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.400 253542 DEBUG nova.objects.instance [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.421 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <uuid>de5bfbef-7a99-4280-a304-71b9099f110b</uuid>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <name>instance-0000008e</name>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestSnapshotPattern-server-1748422570</nova:name>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:07:31</nova:creationTime>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:user uuid="aef72e2ffce442d1848c4753c324ae92">tempest-TestSnapshotPattern-569624779-project-member</nova:user>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:project uuid="8771100a91ef4eb3b58cc4840f6154b4">tempest-TestSnapshotPattern-569624779</nova:project>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="cea21f13-1c78-4633-9d51-3cb641934c22"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <nova:port uuid="ff88ca8a-d270-4991-b2c4-617f04418848">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="serial">de5bfbef-7a99-4280-a304-71b9099f110b</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="uuid">de5bfbef-7a99-4280-a304-71b9099f110b</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk.config">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ec:c7:f1"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <target dev="tapff88ca8a-d2"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/console.log" append="off"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:07:32 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:07:32 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:07:32 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:07:32 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.422 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Preparing to wait for external event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.422 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG nova.virt.libvirt.vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:27Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=d
e5bfbef-7a99-4280-a304-71b9099f110b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.424 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.424 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.425 253542 DEBUG os_vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.426 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.426 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.434 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff88ca8a-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.434 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff88ca8a-d2, col_values=(('external_ids', {'iface-id': 'ff88ca8a-d270-4991-b2c4-617f04418848', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:c7:f1', 'vm-uuid': 'de5bfbef-7a99-4280-a304-71b9099f110b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:32 np0005534516 NetworkManager[48915]: <info>  [1764061652.4386] manager: (tapff88ca8a-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.441 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.445 253542 INFO os_vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2')#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.473 253542 DEBUG nova.compute.manager [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.474 253542 DEBUG nova.compute.manager [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-2eda6ce1-df50-4620-a5e9-d08e62f7350e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.474 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No VIF found with MAC fa:16:3e:ec:c7:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.496 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Using config drive#033[00m
Nov 25 04:07:32 np0005534516 nova_compute[253538]: 2025-11-25 09:07:32.519 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.284 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating config drive at /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.289 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaw7bvzpx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.341 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.341 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.356 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.455 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaw7bvzpx" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.480 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.485 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config de5bfbef-7a99-4280-a304-71b9099f110b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.564 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.674 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config de5bfbef-7a99-4280-a304-71b9099f110b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.675 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deleting local config drive /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config because it was imported into RBD.#033[00m
Nov 25 04:07:33 np0005534516 kernel: tapff88ca8a-d2: entered promiscuous mode
Nov 25 04:07:33 np0005534516 NetworkManager[48915]: <info>  [1764061653.7459] manager: (tapff88ca8a-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Nov 25 04:07:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:33Z|01477|binding|INFO|Claiming lport ff88ca8a-d270-4991-b2c4-617f04418848 for this chassis.
Nov 25 04:07:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:33Z|01478|binding|INFO|ff88ca8a-d270-4991-b2c4-617f04418848: Claiming fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:33Z|01479|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 ovn-installed in OVS
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:33Z|01480|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 up in Southbound
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.773 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:c7:f1 10.100.0.6'], port_security=['fa:16:3e:ec:c7:f1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de5bfbef-7a99-4280-a304-71b9099f110b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=ff88ca8a-d270-4991-b2c4-617f04418848) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.774 162739 INFO neutron.agent.ovn.metadata.agent [-] Port ff88ca8a-d270-4991-b2c4-617f04418848 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c bound to our chassis#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.775 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.775 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance network_info: |[{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 2eda6ce1-df50-4620-a5e9-d08e62f7350e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:07:33 np0005534516 systemd-udevd[403247]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.781 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start _get_guest_xml network_info=[{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:33 np0005534516 systemd-machined[215790]: New machine qemu-171-instance-0000008e.
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.788 253542 WARNING nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:07:33 np0005534516 NetworkManager[48915]: <info>  [1764061653.7951] device (tapff88ca8a-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:07:33 np0005534516 NetworkManager[48915]: <info>  [1764061653.7959] device (tapff88ca8a-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.798 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[120a4caa-3d83-4d23-910e-6d2c46e55aee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.800 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:07:33 np0005534516 systemd[1]: Started Virtual Machine qemu-171-instance-0000008e.
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.802 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.805 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.812 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.843 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[35eca959-4acd-4361-94e8-5a2c5ec9d3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.848 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[54a31d38-3e0f-4797-a733-78c1af53c955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.885 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1074e76c-784a-4afb-9fc1-f9ec61bc3675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f64606-3cae-4c92-941a-cde1787ea21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403263, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f771805-132d-4ddb-82d8-2afecb09a2cb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696928, 'tstamp': 696928}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403264, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696931, 'tstamp': 696931}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403264, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.937 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:33 np0005534516 nova_compute[253538]: 2025-11-25 09:07:33.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.940 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:33 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Nov 25 04:07:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:07:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953828355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.283 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.316 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.327 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.484 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.4831617, de5bfbef-7a99-4280-a304-71b9099f110b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.485 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Started (Lifecycle Event)#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.502 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.507 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.4834478, de5bfbef-7a99-4280-a304-71b9099f110b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.507 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.523 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.525 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.545 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.579 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Processing event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 WARNING nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received unexpected event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.582 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.586 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.586144, de5bfbef-7a99-4280-a304-71b9099f110b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.588 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.601 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.603 253542 INFO nova.virt.libvirt.driver [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance spawned successfully.#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.603 253542 INFO nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 7.11 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.604 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.606 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.773 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:07:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:07:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831369604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.800 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.802 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.803 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.804 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.805 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.805 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.806 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.807 253542 DEBUG nova.objects.instance [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.810 253542 INFO nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 8.48 seconds to build instance.#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.829 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <uuid>a5ec67ec-7042-47d0-925d-6ff3847d3846</uuid>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <name>instance-0000008d</name>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1660352591</nova:name>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:07:33</nova:creationTime>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:port uuid="05d0fd93-ce0f-4842-962f-c9491d3850c8">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <nova:port uuid="2eda6ce1-df50-4620-a5e9-d08e62f7350e">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fefc:ed73" ipVersion="6"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fefc:ed73" ipVersion="6"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="serial">a5ec67ec-7042-47d0-925d-6ff3847d3846</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="uuid">a5ec67ec-7042-47d0-925d-6ff3847d3846</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a5ec67ec-7042-47d0-925d-6ff3847d3846_disk">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:e4:ec:f8"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <target dev="tap05d0fd93-ce"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:fc:ed:73"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <target dev="tap2eda6ce1-df"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/console.log" append="off"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:07:34 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:07:34 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:07:34 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:07:34 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Preparing to wait for external event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.831 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.831 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Preparing to wait for external event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.834 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.834 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.835 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.836 253542 DEBUG os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.839 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.841 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.843 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05d0fd93-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.844 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05d0fd93-ce, col_values=(('external_ids', {'iface-id': '05d0fd93-ce0f-4842-962f-c9491d3850c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:ec:f8', 'vm-uuid': 'a5ec67ec-7042-47d0-925d-6ff3847d3846'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 NetworkManager[48915]: <info>  [1764061654.8464] manager: (tap05d0fd93-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.855 253542 INFO os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce')#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.856 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.857 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.859 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.859 253542 DEBUG os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.863 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eda6ce1-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eda6ce1-df, col_values=(('external_ids', {'iface-id': '2eda6ce1-df50-4620-a5e9-d08e62f7350e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ed:73', 'vm-uuid': 'a5ec67ec-7042-47d0-925d-6ff3847d3846'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:34 np0005534516 NetworkManager[48915]: <info>  [1764061654.8661] manager: (tap2eda6ce1-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.873 253542 INFO os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df')#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.914 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:e4:ec:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:fc:ed:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.916 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Using config drive#033[00m
Nov 25 04:07:34 np0005534516 nova_compute[253538]: 2025-11-25 09:07:34.942 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.513 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating config drive at /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.520 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmokdw9gy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.564 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 2eda6ce1-df50-4620-a5e9-d08e62f7350e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.565 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.578 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.675 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmokdw9gy" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.700 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.704 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.849 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.850 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deleting local config drive /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config because it was imported into RBD.#033[00m
Nov 25 04:07:35 np0005534516 kernel: tap05d0fd93-ce: entered promiscuous mode
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9007] manager: (tap05d0fd93-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Nov 25 04:07:35 np0005534516 systemd-udevd[403251]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01481|binding|INFO|Claiming lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 for this chassis.
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01482|binding|INFO|05d0fd93-ce0f-4842-962f-c9491d3850c8: Claiming fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9116] device (tap05d0fd93-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9128] device (tap05d0fd93-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.917 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:ec:f8 10.100.0.7'], port_security=['fa:16:3e:e4:ec:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=05d0fd93-ce0f-4842-962f-c9491d3850c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.920 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 05d0fd93-ce0f-4842-962f-c9491d3850c8 in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e bound to our chassis#033[00m
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9239] manager: (tap2eda6ce1-df): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.924 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e#033[00m
Nov 25 04:07:35 np0005534516 kernel: tap2eda6ce1-df: entered promiscuous mode
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01483|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 ovn-installed in OVS
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01484|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 up in Southbound
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01485|if_status|INFO|Not updating pb chassis for 2eda6ce1-df50-4620-a5e9-d08e62f7350e now as sb is readonly
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01486|binding|INFO|Claiming lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e for this chassis.
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01487|binding|INFO|2eda6ce1-df50-4620-a5e9-d08e62f7350e: Claiming fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9371] device (tap2eda6ce1-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:07:35 np0005534516 NetworkManager[48915]: <info>  [1764061655.9379] device (tap2eda6ce1-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.940 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8caf94a-74e6-4047-8855-5cc608bd5ba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.943 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2a6609b2-b1 in ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01488|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e ovn-installed in OVS
Nov 25 04:07:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:35Z|01489|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e up in Southbound
Nov 25 04:07:35 np0005534516 nova_compute[253538]: 2025-11-25 09:07:35.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.952 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], port_security=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fefc:ed73/64 2001:db8::f816:3eff:fefc:ed73/64', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2eda6ce1-df50-4620-a5e9-d08e62f7350e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.950 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2a6609b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2578cc-43d0-408d-a889-f6a671ca9177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efbdfe45-d082-4cae-a7f9-0d986893895e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:35 np0005534516 systemd-machined[215790]: New machine qemu-172-instance-0000008d.
Nov 25 04:07:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.976 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d9ec2e-4013-43e0-87f6-a9ed95812cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:35 np0005534516 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59ad622e-e0c1-45c4-aee8-93f7956353e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.047 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[05f6717e-2417-4d2e-8461-d8f6ac9eb271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.065 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbe80fd-3af6-4e01-9f2c-b660769c543a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 NetworkManager[48915]: <info>  [1764061656.0658] manager: (tap2a6609b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.114 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a804e2e0-5ef6-43eb-b7ba-28ea8134e930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.117 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e7abc562-9895-4f43-b503-bf5a0589ba42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 NetworkManager[48915]: <info>  [1764061656.1534] device (tap2a6609b2-b0): carrier: link connected
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.160 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[15a3bc11-3207-409b-b19b-11acdb1f3abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1aa8aa-4120-4d48-8a5c-36d25e28eed0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 23436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403495, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71809fcf-75ca-436e-8a5d-f2ab6f4b5da1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:74b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701075, 'tstamp': 701075}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403499, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b22a73b7-6cae-4aea-b1b8-2dd6b605ebba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 23436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 403505, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 podman[403471]: 2025-11-25 09:07:36.242302783 +0000 UTC m=+0.129236735 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e8c4b4-8617-4e69-b004-9e82b76c4989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e247429-3fe3-47b5-a9c8-f71c3273ec42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:36 np0005534516 NetworkManager[48915]: <info>  [1764061656.3556] manager: (tap2a6609b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Nov 25 04:07:36 np0005534516 kernel: tap2a6609b2-b0: entered promiscuous mode
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.358 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.361 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:07:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:36Z|01490|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c67dbf89-bcd1-46ab-8f5b-e50fbf81745c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.363 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.363 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'env', 'PROCESS_TAG=haproxy-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.489 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.489156, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.490 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Started (Lifecycle Event)#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.525 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.489671, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.573 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:07:36 np0005534516 podman[403578]: 2025-11-25 09:07:36.803470147 +0000 UTC m=+0.051748778 container create 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:36 np0005534516 systemd[1]: Started libpod-conmon-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope.
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.866 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:07:36 np0005534516 podman[403578]: 2025-11-25 09:07:36.777769439 +0000 UTC m=+0.026048090 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:07:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.877 253542 DEBUG nova.compute.manager [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.878 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.879 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.879 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.880 253542 DEBUG nova.compute.manager [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Processing event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:07:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09528d4ae8d074a0aa083a055a8dc133b765cc40f864841f0ed3ff34768cc83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:36 np0005534516 podman[403578]: 2025-11-25 09:07:36.902222692 +0000 UTC m=+0.150501363 container init 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 04:07:36 np0005534516 podman[403578]: 2025-11-25 09:07:36.907938347 +0000 UTC m=+0.156216988 container start 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:07:36 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : New worker (403600) forked
Nov 25 04:07:36 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : Loading success.
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.948 253542 DEBUG nova.compute.manager [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.948 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG nova.compute.manager [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Processing event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.950 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.954 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.9539952, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.954 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.956 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.961 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance spawned successfully.#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.961 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.977 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.982 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2eda6ce1-df50-4620-a5e9-d08e62f7350e in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis#033[00m
Nov 25 04:07:36 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.986 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.986 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.992 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.993 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.994 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.994 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.995 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:36 np0005534516 nova_compute[253538]: 2025-11-25 09:07:36.996 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b25c4e5-1d94-4c65-b0c1-8f31f1d97aa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.002 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21786b2a-51 in ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.003 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.004 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21786b2a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7557a6b9-826d-4f57-8af9-6f4d571ee1b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.005 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74a74390-e687-47b0-98e7-1ec1b531a9b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.020 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb97c39-9d82-43b9-bf6e-05f3202df5bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d689c4b8-33de-4c1f-9c4f-4d748132ec0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.064 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a37f7dc5-001c-4db4-a28d-2495c98c71a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f255de48-71b1-44f3-ab0d-02e6ce382e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 NetworkManager[48915]: <info>  [1764061657.0709] manager: (tap21786b2a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/615)
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.081 253542 INFO nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 10.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.081 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:07:37 np0005534516 systemd-udevd[403616]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.119 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40c69750-a4a3-4977-84f8-8bdfed8288fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.125 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3102d0-278c-4e7e-a389-8049b8b288fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 NetworkManager[48915]: <info>  [1764061657.1507] device (tap21786b2a-50): carrier: link connected
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.151 253542 INFO nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 11.50 seconds to build instance.#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.156 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[858d829d-5964-49b2-ad0b-f0e226279bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.171 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcff875-0499-4200-9056-b8b0b30f5dec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 31262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403635, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.179 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.185 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[614b21dc-01ea-44a4-b562-6d92cac8714f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:b782'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701174, 'tstamp': 701174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403636, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.204 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8ce041-bd4c-4087-a87f-524aae15eb58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 31262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 403637, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4c415e89-134b-4428-932e-ab042ec18aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f19485ec-6536-443b-be8b-d8b0310a6b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.282 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.283 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.283 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:37 np0005534516 NetworkManager[48915]: <info>  [1764061657.2855] manager: (tap21786b2a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Nov 25 04:07:37 np0005534516 kernel: tap21786b2a-50: entered promiscuous mode
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.288 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:37Z|01491|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 04:07:37 np0005534516 nova_compute[253538]: 2025-11-25 09:07:37.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.306 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.307 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27b33ae7-2bdb-47a2-be79-4e5acd21409f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.307 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.308 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'env', 'PROCESS_TAG=haproxy-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:07:37 np0005534516 podman[403668]: 2025-11-25 09:07:37.724743162 +0000 UTC m=+0.052115008 container create 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:07:37 np0005534516 systemd[1]: Started libpod-conmon-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope.
Nov 25 04:07:37 np0005534516 podman[403668]: 2025-11-25 09:07:37.697430329 +0000 UTC m=+0.024802195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:07:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:07:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489261d86b924eb877c414a169c28246ea802beb251d049c1cd79a5b173b9a69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:07:37 np0005534516 podman[403668]: 2025-11-25 09:07:37.8269489 +0000 UTC m=+0.154320746 container init 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:07:37 np0005534516 podman[403668]: 2025-11-25 09:07:37.834922547 +0000 UTC m=+0.162294393 container start 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:07:37 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : New worker (403689) forked
Nov 25 04:07:37 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : Loading success.
Nov 25 04:07:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.898 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:07:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.994 253542 DEBUG nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:07:38 np0005534516 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 WARNING nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.035 253542 DEBUG nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.035 253542 WARNING nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with vm_state active and task_state None.#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:39 np0005534516 nova_compute[253538]: 2025-11-25 09:07:39.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Nov 25 04:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:07:41 np0005534516 nova_compute[253538]: 2025-11-25 09:07:41.490 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:41 np0005534516 nova_compute[253538]: 2025-11-25 09:07:41.491 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:07:41 np0005534516 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:41 np0005534516 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:41 np0005534516 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:07:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 408 KiB/s wr, 155 op/s
Nov 25 04:07:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.246 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.246 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.285 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.286 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.286 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:07:44 np0005534516 nova_compute[253538]: 2025-11-25 09:07:44.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 04:07:46 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:07:46.900 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:07:47 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:47Z|00185|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.6
Nov 25 04:07:47 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:47Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 04:07:47 np0005534516 nova_compute[253538]: 2025-11-25 09:07:47.417 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:07:47 np0005534516 nova_compute[253538]: 2025-11-25 09:07:47.418 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:47 np0005534516 nova_compute[253538]: 2025-11-25 09:07:47.434 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:48 np0005534516 nova_compute[253538]: 2025-11-25 09:07:48.017 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:48 np0005534516 nova_compute[253538]: 2025-11-25 09:07:48.018 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 302 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 347 KiB/s wr, 180 op/s
Nov 25 04:07:49 np0005534516 nova_compute[253538]: 2025-11-25 09:07:49.171 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:49 np0005534516 nova_compute[253538]: 2025-11-25 09:07:49.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 319 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 195 op/s
Nov 25 04:07:50 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:50Z|00187|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.6
Nov 25 04:07:50 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:50Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 04:07:50 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:50Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 04:07:50 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:50Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.737 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:07:51 np0005534516 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:07:52 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:52Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 04:07:52 np0005534516 ovn_controller[152859]: 2025-11-25T09:07:52Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 04:07:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 322 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 04:07:52 np0005534516 nova_compute[253538]: 2025-11-25 09:07:52.886 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:07:52 np0005534516 nova_compute[253538]: 2025-11-25 09:07:52.903 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:07:52 np0005534516 nova_compute[253538]: 2025-11-25 09:07:52.904 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:07:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:07:53
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:07:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:07:53 np0005534516 nova_compute[253538]: 2025-11-25 09:07:53.896 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:07:54 np0005534516 nova_compute[253538]: 2025-11-25 09:07:54.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 338 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Nov 25 04:07:54 np0005534516 nova_compute[253538]: 2025-11-25 09:07:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:54 np0005534516 nova_compute[253538]: 2025-11-25 09:07:54.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:07:54 np0005534516 nova_compute[253538]: 2025-11-25 09:07:54.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 340 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 25 04:07:56 np0005534516 nova_compute[253538]: 2025-11-25 09:07:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:57 np0005534516 nova_compute[253538]: 2025-11-25 09:07:57.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:07:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Nov 25 04:07:58 np0005534516 nova_compute[253538]: 2025-11-25 09:07:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.522 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.523 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.535 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:07:59 np0005534516 podman[403699]: 2025-11-25 09:07:59.826180847 +0000 UTC m=+0.058057398 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 04:07:59 np0005534516 podman[403698]: 2025-11-25 09:07:59.829253421 +0000 UTC m=+0.067229109 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.996 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:07:59 np0005534516 nova_compute[253538]: 2025-11-25 09:07:59.996 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.006 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.007 253542 INFO nova.compute.claims [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.140 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.199 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.199 253542 DEBUG nova.compute.provider_tree [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:08:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.211 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.230 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.326 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736770756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.782 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.790 253542 DEBUG nova.compute.provider_tree [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.806 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.827 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.827 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.876 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.876 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.903 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:08:00 np0005534516 nova_compute[253538]: 2025-11-25 09:08:00.922 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.030 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.032 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.032 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating image(s)#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.057 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.086 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.110 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.114 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.166 253542 DEBUG nova.policy [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.212 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.213 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.214 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.214 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.242 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.246 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 24611274-7a7c-4258-8631-032a6c1d8410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.511 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 24611274-7a7c-4258-8631-032a6c1d8410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.572 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.656 253542 DEBUG nova.objects.instance [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.668 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.668 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Ensure instance console log exists: /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:01 np0005534516 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:02 np0005534516 nova_compute[253538]: 2025-11-25 09:08:02.107 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully created port: 0967717d-564b-4989-8f67-1cd8c2de57ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:08:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 1.3 MiB/s wr, 60 op/s
Nov 25 04:08:02 np0005534516 nova_compute[253538]: 2025-11-25 09:08:02.757 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully created port: 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:08:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.634 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully updated port: 0967717d-564b-4989-8f67-1cd8c2de57ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.710 253542 DEBUG nova.compute.manager [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG nova.compute.manager [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:03 np0005534516 nova_compute[253538]: 2025-11-25 09:08:03.893 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:08:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:04 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143973489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.103 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.207 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.207 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.211 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.212 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.215 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.216 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.232 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.248 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017758341065963405 of space, bias 1.0, pg target 0.5327502319789021 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001769983761476851 of space, bias 1.0, pg target 0.5309951284430553 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:08:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.422 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.423 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2982MB free_disk=59.89088821411133GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.424 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.424 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.451 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully updated port: 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aeb9ccf-2506-41d1-92c2-c72892096857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a5ec67ec-7042-47d0-925d-6ff3847d3846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance de5bfbef-7a99-4280-a304-71b9099f110b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 24611274-7a7c-4258-8631-032a6c1d8410 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.616 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.669 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:08:04 np0005534516 nova_compute[253538]: 2025-11-25 09:08:04.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164471518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.079 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.089 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.105 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.130 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.131 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG nova.compute.manager [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG nova.compute.manager [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-6effd17c-b1ee-44e2-8346-9e445de0dfb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:05 np0005534516 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:08:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 54K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1374 writes, 6219 keys, 1374 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s#012Interval WAL: 1374 writes, 1374 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.3      2.17              0.23        38    0.057       0      0       0.0       0.0#012  L6      1/0    8.09 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     61.4     51.3      5.68              0.94        37    0.154    224K    20K       0.0       0.0#012 Sum      1/0    8.09 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     44.4     45.5      7.85              1.18        75    0.105    224K    20K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     41.3     41.8      1.26              0.15        10    0.126     38K   2553       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     61.4     51.3      5.68              0.94        37    0.154    224K    20K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.3      2.17              0.23        37    0.059       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.064, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.35 GB write, 0.07 MB/s write, 0.34 GB read, 0.07 MB/s read, 7.9 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 40.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.00026 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2656,39.04 MB,12.8429%) FilterBlock(76,633.61 KB,0.203539%) IndexBlock(76,1011.41 KB,0.324902%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 04:08:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Nov 25 04:08:06 np0005534516 podman[403973]: 2025-11-25 09:08:06.85904195 +0000 UTC m=+0.105568251 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 04:08:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.652 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.696 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.696 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance network_info: |[{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.697 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.697 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.701 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start _get_guest_xml network_info=[{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.706 253542 WARNING nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.710 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.711 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.714 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.714 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.718 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.721 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:09 np0005534516 nova_compute[253538]: 2025-11-25 09:08:09.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 04:08:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:08:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565731362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.264 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.288 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.293 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:08:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668307544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.772 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.774 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.774 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.775 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.776 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.777 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.777 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.778 253542 DEBUG nova.objects.instance [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.793 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <uuid>24611274-7a7c-4258-8631-032a6c1d8410</uuid>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <name>instance-0000008f</name>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1852306099</nova:name>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:08:09</nova:creationTime>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:port uuid="0967717d-564b-4989-8f67-1cd8c2de57ce">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <nova:port uuid="6effd17c-b1ee-44e2-8346-9e445de0dfb9">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:6fde" ipVersion="6"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:6fde" ipVersion="6"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="serial">24611274-7a7c-4258-8631-032a6c1d8410</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="uuid">24611274-7a7c-4258-8631-032a6c1d8410</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/24611274-7a7c-4258-8631-032a6c1d8410_disk">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/24611274-7a7c-4258-8631-032a6c1d8410_disk.config">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:11:69:66"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <target dev="tap0967717d-56"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:28:6f:de"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <target dev="tap6effd17c-b1"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/console.log" append="off"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:08:10 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:08:10 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:08:10 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:08:10 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Preparing to wait for external event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Preparing to wait for external event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.797 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.797 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.798 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.798 253542 DEBUG os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.799 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.802 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0967717d-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.803 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0967717d-56, col_values=(('external_ids', {'iface-id': '0967717d-564b-4989-8f67-1cd8c2de57ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:69:66', 'vm-uuid': '24611274-7a7c-4258-8631-032a6c1d8410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 NetworkManager[48915]: <info>  [1764061690.8060] manager: (tap0967717d-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.813 253542 INFO os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56')#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.814 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.814 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.815 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.815 253542 DEBUG os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6effd17c-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6effd17c-b1, col_values=(('external_ids', {'iface-id': '6effd17c-b1ee-44e2-8346-9e445de0dfb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:6f:de', 'vm-uuid': '24611274-7a7c-4258-8631-032a6c1d8410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 NetworkManager[48915]: <info>  [1764061690.8213] manager: (tap6effd17c-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.827 253542 INFO os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1')#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:11:69:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:28:6f:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.872 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Using config drive#033[00m
Nov 25 04:08:10 np0005534516 nova_compute[253538]: 2025-11-25 09:08:10.896 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.553 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating config drive at /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.559 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobzoy0q1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.705 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobzoy0q1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.735 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.740 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config 24611274-7a7c-4258-8631-032a6c1d8410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.949 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config 24611274-7a7c-4258-8631-032a6c1d8410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:11 np0005534516 nova_compute[253538]: 2025-11-25 09:08:11.950 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deleting local config drive /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config because it was imported into RBD.#033[00m
Nov 25 04:08:11 np0005534516 NetworkManager[48915]: <info>  [1764061691.9970] manager: (tap0967717d-56): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Nov 25 04:08:12 np0005534516 kernel: tap0967717d-56: entered promiscuous mode
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01492|binding|INFO|Claiming lport 0967717d-564b-4989-8f67-1cd8c2de57ce for this chassis.
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01493|binding|INFO|0967717d-564b-4989-8f67-1cd8c2de57ce: Claiming fa:16:3e:11:69:66 10.100.0.14
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.011 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:69:66 10.100.0.14'], port_security=['fa:16:3e:11:69:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0967717d-564b-4989-8f67-1cd8c2de57ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.013 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0967717d-564b-4989-8f67-1cd8c2de57ce in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e bound to our chassis#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.014 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e#033[00m
Nov 25 04:08:12 np0005534516 NetworkManager[48915]: <info>  [1764061692.0202] manager: (tap6effd17c-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/620)
Nov 25 04:08:12 np0005534516 kernel: tap6effd17c-b1: entered promiscuous mode
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01494|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce ovn-installed in OVS
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01495|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce up in Southbound
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01496|if_status|INFO|Dropped 1 log messages in last 36 seconds (most recently, 36 seconds ago) due to excessive rate
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01497|if_status|INFO|Not updating pb chassis for 6effd17c-b1ee-44e2-8346-9e445de0dfb9 now as sb is readonly
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.029 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8747a3-1e84-4856-98dd-5d4c93226c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01498|binding|INFO|Claiming lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 for this chassis.
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01499|binding|INFO|6effd17c-b1ee-44e2-8346-9e445de0dfb9: Claiming fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde
Nov 25 04:08:12 np0005534516 systemd-udevd[404143]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:08:12 np0005534516 systemd-udevd[404142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.043 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], port_security=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:6fde/64 2001:db8::f816:3eff:fe28:6fde/64', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6effd17c-b1ee-44e2-8346-9e445de0dfb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:12 np0005534516 NetworkManager[48915]: <info>  [1764061692.0566] device (tap0967717d-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:08:12 np0005534516 NetworkManager[48915]: <info>  [1764061692.0580] device (tap0967717d-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:08:12 np0005534516 NetworkManager[48915]: <info>  [1764061692.0587] device (tap6effd17c-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01500|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 ovn-installed in OVS
Nov 25 04:08:12 np0005534516 NetworkManager[48915]: <info>  [1764061692.0598] device (tap6effd17c-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:08:12 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:12Z|01501|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 up in Southbound
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 systemd-machined[215790]: New machine qemu-173-instance-0000008f.
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.062 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b64fde43-7b7f-4f19-8606-2d1f5852cfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.066 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0b882f74-0d08-443e-ae29-08d0c39f60d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 systemd[1]: Started Virtual Machine qemu-173-instance-0000008f.
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.094 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8b7eff-c40f-4bd0-99d9-4dd41098d115]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27227a41-b7aa-4770-846c-f031ebf9cd85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 37459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404152, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd310dd8-e34c-4243-804e-e781a19cb49b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701090, 'tstamp': 701090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404158, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701094, 'tstamp': 701094}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404158, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.130 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.131 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.132 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.149 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[244398fb-54e0-4816-8c6a-6f885d006c9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.184 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4efeb9-0756-4c0e-85d6-d3ba7d57de1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.187 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[99009079-1392-418c-8662-7b82cf958a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.214 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c75e840-98d5-40b5-a954-86f6d1a3c1a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.232 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[577e72b4-fd9c-4bbd-b96f-2e69dd19d692]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404190, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbb7e66-a310-440e-b78b-81f5b8bfd799]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap21786b2a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701187, 'tstamp': 701187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404202, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.353 253542 DEBUG nova.compute.manager [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG nova.compute.manager [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Processing event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.402 253542 DEBUG nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.436 253542 INFO nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] instance snapshotting#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.524 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 6effd17c-b1ee-44e2-8346-9e445de0dfb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.526 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.540 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.688 253542 INFO nova.virt.libvirt.driver [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Beginning live snapshot process#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.851 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(ac3bcbe503ff4146b596d39c14ad05a1) on rbd image(de5bfbef-7a99-4280-a304-71b9099f110b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.956 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061692.9560034, 24611274-7a7c-4258-8631-032a6c1d8410 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.957 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Started (Lifecycle Event)#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061692.9567604, 24611274-7a7c-4258-8631-032a6c1d8410 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:12 np0005534516 nova_compute[253538]: 2025-11-25 09:08:12.985 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:08:13 np0005534516 nova_compute[253538]: 2025-11-25 09:08:13.001 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:13 np0005534516 nova_compute[253538]: 2025-11-25 09:08:13.005 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:08:13 np0005534516 nova_compute[253538]: 2025-11-25 09:08:13.029 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d2528635-ae30-4bab-b79b-d24bee4a10ae does not exist
Nov 25 04:08:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4b8cf18e-0c9b-46e7-a9b2-fe3285533b4d does not exist
Nov 25 04:08:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 74799879-c0a7-437b-94b5-12e0e37cecc1 does not exist
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:08:13 np0005534516 nova_compute[253538]: 2025-11-25 09:08:13.518 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk@ac3bcbe503ff4146b596d39c14ad05a1 to images/c1cd651b-e908-45f2-ad61-7952319cf709 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 04:08:13 np0005534516 nova_compute[253538]: 2025-11-25 09:08:13.663 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] flattening images/c1cd651b-e908-45f2-ad61-7952319cf709 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 04:08:13 np0005534516 podman[404568]: 2025-11-25 09:08:13.672091426 +0000 UTC m=+0.062073549 container create ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:08:13 np0005534516 podman[404568]: 2025-11-25 09:08:13.641533275 +0000 UTC m=+0.031515498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:13 np0005534516 systemd[1]: Started libpod-conmon-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope.
Nov 25 04:08:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:13 np0005534516 podman[404568]: 2025-11-25 09:08:13.809950123 +0000 UTC m=+0.199932236 container init ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:08:13 np0005534516 podman[404568]: 2025-11-25 09:08:13.819524003 +0000 UTC m=+0.209506116 container start ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 04:08:13 np0005534516 podman[404568]: 2025-11-25 09:08:13.825223678 +0000 UTC m=+0.215205791 container attach ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:08:13 np0005534516 systemd[1]: libpod-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope: Deactivated successfully.
Nov 25 04:08:13 np0005534516 intelligent_noether[404606]: 167 167
Nov 25 04:08:13 np0005534516 conmon[404606]: conmon ad180e4f50eb484df2a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope/container/memory.events
Nov 25 04:08:13 np0005534516 podman[404611]: 2025-11-25 09:08:13.882172906 +0000 UTC m=+0.032077863 container died ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 390 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 MiB/s wr, 30 op/s
Nov 25 04:08:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ab7a8d1accd815a6477abad2d9398c2bbb06d8c8e4cc247ddf034fadd04cce91-merged.mount: Deactivated successfully.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.441 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.442 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.443 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.443 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.444 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No event matching network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce in dict_keys([('network-vif-plugged', '6effd17c-b1ee-44e2-8346-9e445de0dfb9')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.444 253542 WARNING nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.445 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.445 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Processing event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.447 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.447 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.449 253542 WARNING nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.450 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.456 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061694.4563718, 24611274-7a7c-4258-8631-032a6c1d8410 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.457 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.460 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.466 253542 INFO nova.virt.libvirt.driver [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance spawned successfully.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.467 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.480 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.487 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.492 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.492 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.493 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.493 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.494 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.494 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.518 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.566 253542 INFO nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 13.54 seconds to spawn the instance on the hypervisor.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.567 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:08:14 np0005534516 podman[404611]: 2025-11-25 09:08:14.598771127 +0000 UTC m=+0.748676074 container remove ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 04:08:14 np0005534516 systemd[1]: libpod-conmon-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope: Deactivated successfully.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.632 253542 INFO nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 15.05 seconds to build instance.
Nov 25 04:08:14 np0005534516 nova_compute[253538]: 2025-11-25 09:08:14.645 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:08:14 np0005534516 podman[404634]: 2025-11-25 09:08:14.879486778 +0000 UTC m=+0.087710116 container create b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:08:14 np0005534516 podman[404634]: 2025-11-25 09:08:14.817547564 +0000 UTC m=+0.025770922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:14 np0005534516 systemd[1]: Started libpod-conmon-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope.
Nov 25 04:08:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:14 np0005534516 podman[404634]: 2025-11-25 09:08:14.993198449 +0000 UTC m=+0.201421817 container init b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:08:15 np0005534516 podman[404634]: 2025-11-25 09:08:15.002766469 +0000 UTC m=+0.210989807 container start b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:08:15 np0005534516 podman[404634]: 2025-11-25 09:08:15.007791796 +0000 UTC m=+0.216015154 container attach b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 04:08:15 np0005534516 nova_compute[253538]: 2025-11-25 09:08:15.068 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] removing snapshot(ac3bcbe503ff4146b596d39c14ad05a1) on rbd image(de5bfbef-7a99-4280-a304-71b9099f110b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 04:08:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 25 04:08:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 25 04:08:15 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 25 04:08:15 np0005534516 nova_compute[253538]: 2025-11-25 09:08:15.640 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(snap) on rbd image(c1cd651b-e908-45f2-ad61-7952319cf709) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 04:08:15 np0005534516 nova_compute[253538]: 2025-11-25 09:08:15.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:08:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 398 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 34 op/s
Nov 25 04:08:16 np0005534516 serene_pare[404650]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:08:16 np0005534516 serene_pare[404650]: --> relative data size: 1.0
Nov 25 04:08:16 np0005534516 serene_pare[404650]: --> All data devices are unavailable
Nov 25 04:08:16 np0005534516 systemd[1]: libpod-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Deactivated successfully.
Nov 25 04:08:16 np0005534516 systemd[1]: libpod-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Consumed 1.081s CPU time.
Nov 25 04:08:16 np0005534516 podman[404634]: 2025-11-25 09:08:16.289053796 +0000 UTC m=+1.497277154 container died b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 04:08:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7-merged.mount: Deactivated successfully.
Nov 25 04:08:16 np0005534516 podman[404634]: 2025-11-25 09:08:16.35727338 +0000 UTC m=+1.565496718 container remove b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:08:16 np0005534516 systemd[1]: libpod-conmon-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Deactivated successfully.
Nov 25 04:08:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 25 04:08:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 25 04:08:16 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.007071915 +0000 UTC m=+0.047335588 container create 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:08:17 np0005534516 systemd[1]: Started libpod-conmon-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope.
Nov 25 04:08:17 np0005534516 kernel: hrtimer: interrupt took 34451577 ns
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:16.988452879 +0000 UTC m=+0.028716562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.177434896 +0000 UTC m=+0.217698599 container init 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.18494737 +0000 UTC m=+0.225211043 container start 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.188541358 +0000 UTC m=+0.228805211 container attach 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:08:17 np0005534516 nostalgic_einstein[404881]: 167 167
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.193745509 +0000 UTC m=+0.234009182 container died 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:08:17 np0005534516 systemd[1]: libpod-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope: Deactivated successfully.
Nov 25 04:08:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e19838e9a19a7a78fc8d5c730f72bce02cf9c010ce28bab6cc948edac06379c6-merged.mount: Deactivated successfully.
Nov 25 04:08:17 np0005534516 podman[404865]: 2025-11-25 09:08:17.232074392 +0000 UTC m=+0.272338065 container remove 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:08:17 np0005534516 systemd[1]: libpod-conmon-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope: Deactivated successfully.
Nov 25 04:08:17 np0005534516 podman[404905]: 2025-11-25 09:08:17.448914726 +0000 UTC m=+0.051223673 container create e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:08:17 np0005534516 systemd[1]: Started libpod-conmon-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope.
Nov 25 04:08:17 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:17 np0005534516 podman[404905]: 2025-11-25 09:08:17.424001798 +0000 UTC m=+0.026310775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:17 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:17 np0005534516 podman[404905]: 2025-11-25 09:08:17.536367243 +0000 UTC m=+0.138676220 container init e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:08:17 np0005534516 podman[404905]: 2025-11-25 09:08:17.543060575 +0000 UTC m=+0.145369512 container start e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:08:17 np0005534516 podman[404905]: 2025-11-25 09:08:17.546071627 +0000 UTC m=+0.148380574 container attach e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:08:17 np0005534516 nova_compute[253538]: 2025-11-25 09:08:17.879 253542 INFO nova.virt.libvirt.driver [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Snapshot image upload complete
Nov 25 04:08:17 np0005534516 nova_compute[253538]: 2025-11-25 09:08:17.881 253542 INFO nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 5.44 seconds to snapshot the instance on the hypervisor.
Nov 25 04:08:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 474 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 278 op/s
Nov 25 04:08:18 np0005534516 great_lamarr[404921]: {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    "0": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "devices": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "/dev/loop3"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            ],
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_name": "ceph_lv0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_size": "21470642176",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "name": "ceph_lv0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "tags": {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_name": "ceph",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.crush_device_class": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.encrypted": "0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_id": "0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.vdo": "0"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            },
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "vg_name": "ceph_vg0"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        }
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    ],
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    "1": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "devices": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "/dev/loop4"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            ],
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_name": "ceph_lv1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_size": "21470642176",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "name": "ceph_lv1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "tags": {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_name": "ceph",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.crush_device_class": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.encrypted": "0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_id": "1",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.vdo": "0"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            },
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "vg_name": "ceph_vg1"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        }
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    ],
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    "2": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "devices": [
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "/dev/loop5"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            ],
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_name": "ceph_lv2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_size": "21470642176",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "name": "ceph_lv2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "tags": {
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.cluster_name": "ceph",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.crush_device_class": "",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.encrypted": "0",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osd_id": "2",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:                "ceph.vdo": "0"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            },
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "type": "block",
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:            "vg_name": "ceph_vg2"
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:        }
Nov 25 04:08:18 np0005534516 great_lamarr[404921]:    ]
Nov 25 04:08:18 np0005534516 great_lamarr[404921]: }
Nov 25 04:08:18 np0005534516 systemd[1]: libpod-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope: Deactivated successfully.
Nov 25 04:08:18 np0005534516 conmon[404921]: conmon e619ae6ddb954b2c71eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope/container/memory.events
Nov 25 04:08:18 np0005534516 podman[404905]: 2025-11-25 09:08:18.424428464 +0000 UTC m=+1.026737411 container died e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:08:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f-merged.mount: Deactivated successfully.
Nov 25 04:08:18 np0005534516 podman[404905]: 2025-11-25 09:08:18.479571333 +0000 UTC m=+1.081880280 container remove e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 04:08:18 np0005534516 systemd[1]: libpod-conmon-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope: Deactivated successfully.
Nov 25 04:08:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 25 04:08:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 25 04:08:18 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.175108691 +0000 UTC m=+0.045124378 container create b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:08:19 np0005534516 nova_compute[253538]: 2025-11-25 09:08:19.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:19 np0005534516 systemd[1]: Started libpod-conmon-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope.
Nov 25 04:08:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.157300207 +0000 UTC m=+0.027315914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.261326714 +0000 UTC m=+0.131342421 container init b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.268967442 +0000 UTC m=+0.138983119 container start b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.279321494 +0000 UTC m=+0.149337191 container attach b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:08:19 np0005534516 interesting_visvesvaraya[405100]: 167 167
Nov 25 04:08:19 np0005534516 systemd[1]: libpod-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope: Deactivated successfully.
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.282293824 +0000 UTC m=+0.152309501 container died b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:08:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bc1b6d4b374b756391a7867573b4d57efc2ddade5c068189bd86b8749a4956f3-merged.mount: Deactivated successfully.
Nov 25 04:08:19 np0005534516 podman[405083]: 2025-11-25 09:08:19.316716801 +0000 UTC m=+0.186732478 container remove b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:08:19 np0005534516 systemd[1]: libpod-conmon-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope: Deactivated successfully.
Nov 25 04:08:19 np0005534516 podman[405124]: 2025-11-25 09:08:19.486895306 +0000 UTC m=+0.023264893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:08:19 np0005534516 podman[405124]: 2025-11-25 09:08:19.822670984 +0000 UTC m=+0.359040561 container create f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:08:19 np0005534516 systemd[1]: Started libpod-conmon-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope.
Nov 25 04:08:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:08:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:08:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 357 op/s
Nov 25 04:08:20 np0005534516 podman[405124]: 2025-11-25 09:08:20.542238775 +0000 UTC m=+1.078608332 container init f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:08:20 np0005534516 podman[405124]: 2025-11-25 09:08:20.550083368 +0000 UTC m=+1.086452925 container start f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.569 253542 DEBUG nova.compute.manager [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG nova.compute.manager [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.571 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:20 np0005534516 podman[405124]: 2025-11-25 09:08:20.615936698 +0000 UTC m=+1.152306255 container attach f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.634 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.635 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.635 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.636 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.636 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.637 253542 INFO nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Terminating instance#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.638 253542 DEBUG nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.665 253542 DEBUG nova.compute.manager [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG nova.compute.manager [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.667 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:20 np0005534516 kernel: tapff88ca8a-d2 (unregistering): left promiscuous mode
Nov 25 04:08:20 np0005534516 NetworkManager[48915]: <info>  [1764061700.7318] device (tapff88ca8a-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:20Z|01502|binding|INFO|Releasing lport ff88ca8a-d270-4991-b2c4-617f04418848 from this chassis (sb_readonly=0)
Nov 25 04:08:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:20Z|01503|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 down in Southbound
Nov 25 04:08:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:20Z|01504|binding|INFO|Removing iface tapff88ca8a-d2 ovn-installed in OVS
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.760 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:c7:f1 10.100.0.6'], port_security=['fa:16:3e:ec:c7:f1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de5bfbef-7a99-4280-a304-71b9099f110b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=ff88ca8a-d270-4991-b2c4-617f04418848) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.761 162739 INFO neutron.agent.ovn.metadata.agent [-] Port ff88ca8a-d270-4991-b2c4-617f04418848 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c unbound from our chassis#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.762 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fab2f4b5-0c35-409e-909f-d566455707d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 25 04:08:20 np0005534516 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008e.scope: Consumed 14.886s CPU time.
Nov 25 04:08:20 np0005534516 systemd-machined[215790]: Machine qemu-171-instance-0000008e terminated.
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.821 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aff40079-9238-49a6-9bbc-1a31492bfc34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.825 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[39553819-68fb-48b6-acaf-d2e24312bf16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.861 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83522beb-2617-45fa-b496-550ff1b30f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.881 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a562be3-8dca-4c46-a767-ad41e32f216a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 22573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405164, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.886 253542 INFO nova.virt.libvirt.driver [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance destroyed successfully.#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.886 253542 DEBUG nova.objects.instance [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'resources' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.899 253542 DEBUG nova.virt.libvirt.vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:17Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=de5bfbef-7a99-4280-a304-71b9099f110b,vcpu_model=<?
>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.899 253542 DEBUG nova.network.os_vif_util [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.900 253542 DEBUG nova.network.os_vif_util [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.900 253542 DEBUG os_vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.902 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff88ca8a-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a770fc-573b-424b-98da-d112fd4f404d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696928, 'tstamp': 696928}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405171, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696931, 'tstamp': 696931}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405171, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.903 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:20 np0005534516 nova_compute[253538]: 2025-11-25 09:08:20.906 253542 INFO os_vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2')#033[00m
Nov 25 04:08:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.907 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]: {
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_id": 1,
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "type": "bluestore"
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    },
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_id": 2,
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "type": "bluestore"
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    },
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_id": 0,
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:        "type": "bluestore"
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]:    }
Nov 25 04:08:21 np0005534516 flamboyant_ellis[405140]: }
Nov 25 04:08:21 np0005534516 systemd[1]: libpod-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Deactivated successfully.
Nov 25 04:08:21 np0005534516 systemd[1]: libpod-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Consumed 1.006s CPU time.
Nov 25 04:08:21 np0005534516 podman[405124]: 2025-11-25 09:08:21.60772711 +0000 UTC m=+2.144096667 container died f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 25 04:08:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1-merged.mount: Deactivated successfully.
Nov 25 04:08:21 np0005534516 podman[405124]: 2025-11-25 09:08:21.756515154 +0000 UTC m=+2.292884701 container remove f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:08:21 np0005534516 systemd[1]: libpod-conmon-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Deactivated successfully.
Nov 25 04:08:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:08:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:08:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:21 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0e4d596b-fcbe-4cc8-9b2f-0d37bfc4bbb9 does not exist
Nov 25 04:08:21 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f6de2a22-d82e-4119-9aa8-4307f8484f0f does not exist
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.924 253542 INFO nova.virt.libvirt.driver [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deleting instance files /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b_del#033[00m
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.925 253542 INFO nova.virt.libvirt.driver [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deletion of /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b_del complete#033[00m
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 INFO nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG oslo.service.loopingcall [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:08:21 np0005534516 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG nova.network.neutron [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:08:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 322 op/s
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.425 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.426 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.446 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.511 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.512 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.532 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.622 253542 DEBUG nova.network.neutron [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.634 253542 INFO nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 0.65 seconds to deallocate network for instance.#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.640 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.640 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.643 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.644 253542 WARNING nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received unexpected event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.680 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.680 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.729 253542 DEBUG nova.compute.manager [req-1c0550e7-e5df-4602-840f-b435f1bbc2e8 req-d77ab5fb-5a3e-4f4c-95de-da350ec737f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-deleted-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:22 np0005534516 nova_compute[253538]: 2025-11-25 09:08:22.783 253542 DEBUG oslo_concurrency.processutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:22 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:22 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827507217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 25 04:08:23 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.270 253542 DEBUG oslo_concurrency.processutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.276 253542 DEBUG nova.compute.provider_tree [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.295 253542 DEBUG nova.scheduler.client.report [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.326 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.379 253542 INFO nova.scheduler.client.report [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Deleted allocations for instance de5bfbef-7a99-4280-a304-71b9099f110b#033[00m
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:23 np0005534516 nova_compute[253538]: 2025-11-25 09:08:23.474 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:24 np0005534516 nova_compute[253538]: 2025-11-25 09:08:24.186 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Nov 25 04:08:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 25 04:08:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 25 04:08:24 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 25 04:08:25 np0005534516 nova_compute[253538]: 2025-11-25 09:08:25.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Nov 25 04:08:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:27Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:69:66 10.100.0.14
Nov 25 04:08:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:27Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:69:66 10.100.0.14
Nov 25 04:08:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 330 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 1.3 MiB/s wr, 115 op/s
Nov 25 04:08:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 25 04:08:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 25 04:08:28 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.521 253542 DEBUG nova.compute.manager [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.521 253542 DEBUG nova.compute.manager [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.871 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.871 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.872 253542 INFO nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Terminating instance#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.873 253542 DEBUG nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:08:28 np0005534516 kernel: tap0ddcebf0-d7 (unregistering): left promiscuous mode
Nov 25 04:08:28 np0005534516 NetworkManager[48915]: <info>  [1764061708.9288] device (tap0ddcebf0-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:28Z|01505|binding|INFO|Releasing lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 from this chassis (sb_readonly=0)
Nov 25 04:08:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:28Z|01506|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 down in Southbound
Nov 25 04:08:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:28Z|01507|binding|INFO|Removing iface tap0ddcebf0-d7 ovn-installed in OVS
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.954 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:95:36 10.100.0.13'], port_security=['fa:16:3e:d3:95:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7aeb9ccf-2506-41d1-92c2-c72892096857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.955 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c unbound from our chassis#033[00m
Nov 25 04:08:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.957 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:08:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93e2ef1b-d5e0-4288-9fb2-24fab8a250c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.959 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c namespace which is not needed anymore#033[00m
Nov 25 04:08:28 np0005534516 nova_compute[253538]: 2025-11-25 09:08:28.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 25 04:08:29 np0005534516 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Consumed 16.654s CPU time.
Nov 25 04:08:29 np0005534516 systemd-machined[215790]: Machine qemu-170-instance-0000008c terminated.
Nov 25 04:08:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:08:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:08:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:08:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : haproxy version is 2.8.14-c23fe91
Nov 25 04:08:29 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : path to executable is /usr/sbin/haproxy
Nov 25 04:08:29 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [WARNING]  (401548) : Exiting Master process...
Nov 25 04:08:29 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [ALERT]    (401548) : Current worker (401550) exited with code 143 (Terminated)
Nov 25 04:08:29 np0005534516 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [WARNING]  (401548) : All workers exited. Exiting... (0)
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 systemd[1]: libpod-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope: Deactivated successfully.
Nov 25 04:08:29 np0005534516 podman[405333]: 2025-11-25 09:08:29.105825768 +0000 UTC m=+0.045084697 container died 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.107 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance destroyed successfully.#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.108 253542 DEBUG nova.objects.instance [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'resources' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.122 253542 DEBUG nova.virt.libvirt.vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:23Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.124 253542 DEBUG nova.network.os_vif_util [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.125 253542 DEBUG nova.network.os_vif_util [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.125 253542 DEBUG os_vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.127 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ddcebf0-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.132 253542 INFO os_vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7')#033[00m
Nov 25 04:08:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41-userdata-shm.mount: Deactivated successfully.
Nov 25 04:08:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c24402a25caa013e206ed4779ad47cd8d103184e7670c046105bc5ea23a229e0-merged.mount: Deactivated successfully.
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 podman[405333]: 2025-11-25 09:08:29.199537296 +0000 UTC m=+0.138796225 container cleanup 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 04:08:29 np0005534516 systemd[1]: libpod-conmon-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope: Deactivated successfully.
Nov 25 04:08:29 np0005534516 podman[405390]: 2025-11-25 09:08:29.530956225 +0000 UTC m=+0.309415293 container remove 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.542 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3188a8c-2124-43ec-9049-181fd3aa5cb8]: (4, ('Tue Nov 25 09:08:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c (83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41)\n83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41\nTue Nov 25 09:08:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c (83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41)\n83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15c557aa-6a4f-4a2d-9fe7-28138e999fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 kernel: tap3c3eb82e-10: left promiscuous mode
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a9eef5-7260-488b-84df-3bc9c5c1d5eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f80f3419-44b8-4e70-a8bb-28c438aeb179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[855a30ca-7223-4ea3-8afb-d641013e513c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[239aeb40-d2b8-4c71-9381-1317b6a1e64d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696910, 'reachable_time': 39197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405405, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.601 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:08:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.602 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[84d1f845-3b2d-4a15-92d6-a22e7b16cbb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:29 np0005534516 systemd[1]: run-netns-ovnmeta\x2d3c3eb82e\x2d1161\x2d4c2f\x2d9fce\x2d53fdf4386d9c.mount: Deactivated successfully.
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.834 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.836 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.836 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.856 253542 INFO nova.virt.libvirt.driver [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deleting instance files /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857_del#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.858 253542 INFO nova.virt.libvirt.driver [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deletion of /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857_del complete#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.912 253542 INFO nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.913 253542 DEBUG oslo.service.loopingcall [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.914 253542 DEBUG nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:08:29 np0005534516 nova_compute[253538]: 2025-11-25 09:08:29.914 253542 DEBUG nova.network.neutron [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:08:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 317 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 2.8 MiB/s wr, 152 op/s
Nov 25 04:08:30 np0005534516 podman[405408]: 2025-11-25 09:08:30.825461715 +0000 UTC m=+0.070213920 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:08:30 np0005534516 podman[405407]: 2025-11-25 09:08:30.827281934 +0000 UTC m=+0.075155674 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 04:08:30 np0005534516 nova_compute[253538]: 2025-11-25 09:08:30.985 253542 DEBUG nova.network.neutron [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.007 253542 INFO nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.053 253542 DEBUG nova.compute.manager [req-982378e9-806a-41a4-8343-c2cd3abfa89f req-86895d17-9848-4c2b-9439-12fe085b7ab5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-deleted-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.058 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.058 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.140 253542 DEBUG oslo_concurrency.processutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.260 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.261 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.281 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880294285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.587 253542 DEBUG oslo_concurrency.processutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.592 253542 DEBUG nova.compute.provider_tree [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.608 253542 DEBUG nova.scheduler.client.report [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.635 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.678 253542 INFO nova.scheduler.client.report [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Deleted allocations for instance 7aeb9ccf-2506-41d1-92c2-c72892096857#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.754 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.906 253542 DEBUG nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.906 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.907 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.908 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.909 253542 DEBUG nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:31 np0005534516 nova_compute[253538]: 2025-11-25 09:08:31.910 253542 WARNING nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received unexpected event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:08:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.2 MiB/s wr, 153 op/s
Nov 25 04:08:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 25 04:08:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 25 04:08:33 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 25 04:08:34 np0005534516 nova_compute[253538]: 2025-11-25 09:08:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:34 np0005534516 nova_compute[253538]: 2025-11-25 09:08:34.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.2 MiB/s wr, 182 op/s
Nov 25 04:08:35 np0005534516 nova_compute[253538]: 2025-11-25 09:08:35.874 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061700.873678, de5bfbef-7a99-4280-a304-71b9099f110b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:35 np0005534516 nova_compute[253538]: 2025-11-25 09:08:35.875 253542 INFO nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:08:35 np0005534516 nova_compute[253538]: 2025-11-25 09:08:35.889 253542 DEBUG nova.compute.manager [None req-859ebcd9-672b-43f0-a40a-78dcb0223ae2 - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:36Z|01508|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 04:08:36 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:36Z|01509|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 04:08:36 np0005534516 nova_compute[253538]: 2025-11-25 09:08:36.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Nov 25 04:08:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.600 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.601 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:08:37 np0005534516 nova_compute[253538]: 2025-11-25 09:08:37.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.601 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:37 np0005534516 podman[405467]: 2025-11-25 09:08:37.847259155 +0000 UTC m=+0.100522933 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:08:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Nov 25 04:08:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:39 np0005534516 nova_compute[253538]: 2025-11-25 09:08:39.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:39 np0005534516 nova_compute[253538]: 2025-11-25 09:08:39.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.103 253542 DEBUG nova.compute.manager [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG nova.compute.manager [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.105 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.187 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.189 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.189 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.190 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.191 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.193 253542 INFO nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Terminating instance#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.195 253542 DEBUG nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:08:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 653 KiB/s wr, 52 op/s
Nov 25 04:08:40 np0005534516 kernel: tap0967717d-56 (unregistering): left promiscuous mode
Nov 25 04:08:40 np0005534516 NetworkManager[48915]: <info>  [1764061720.2468] device (tap0967717d-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01510|binding|INFO|Releasing lport 0967717d-564b-4989-8f67-1cd8c2de57ce from this chassis (sb_readonly=0)
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01511|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce down in Southbound
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01512|binding|INFO|Removing iface tap0967717d-56 ovn-installed in OVS
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 kernel: tap6effd17c-b1 (unregistering): left promiscuous mode
Nov 25 04:08:40 np0005534516 NetworkManager[48915]: <info>  [1764061720.2748] device (tap6effd17c-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.293 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:69:66 10.100.0.14'], port_security=['fa:16:3e:11:69:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0967717d-564b-4989-8f67-1cd8c2de57ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.295 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0967717d-564b-4989-8f67-1cd8c2de57ce in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e unbound from our chassis#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.297 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01513|binding|INFO|Releasing lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 from this chassis (sb_readonly=0)
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01514|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 down in Southbound
Nov 25 04:08:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:40Z|01515|binding|INFO|Removing iface tap6effd17c-b1 ovn-installed in OVS
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.336 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fefa194c-03ec-4691-90c5-083af5d66944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 25 04:08:40 np0005534516 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Consumed 15.211s CPU time.
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.358 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], port_security=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:6fde/64 2001:db8::f816:3eff:fe28:6fde/64', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6effd17c-b1ee-44e2-8346-9e445de0dfb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:40 np0005534516 systemd-machined[215790]: Machine qemu-173-instance-0000008f terminated.
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.367 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5f63d16e-7473-47ad-bb48-2e0a8c4b6c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[257cbf1a-6293-4ec8-843b-1fe20e44aea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.396 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2564ca-49a1-4a5a-8cf2-2d0617555869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08684bc4-5aa0-4e09-a7c4-50b284d2a5f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 37459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405508, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 NetworkManager[48915]: <info>  [1764061720.4360] manager: (tap6effd17c-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.438 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3271e2-2da8-4159-b307-1f917f745d7e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701090, 'tstamp': 701090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405515, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701094, 'tstamp': 701094}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405515, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.451 253542 INFO nova.virt.libvirt.driver [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance destroyed successfully.#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.452 253542 DEBUG nova.objects.instance [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.458 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.458 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.459 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.460 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.461 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.463 253542 DEBUG nova.virt.libvirt.vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:08:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:14Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.463 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.464 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.464 253542 DEBUG os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.466 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0967717d-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.474 253542 INFO os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56')#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.475 253542 DEBUG nova.virt.libvirt.vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:08:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:14Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.475 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.476 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.476 253542 DEBUG os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.477 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6effd17c-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.478 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39d20875-6748-4713-a4a1-31142c7c1693]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.482 253542 INFO os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1')#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.514 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf2b1fe-462d-4679-adc4-d57f309dd54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.518 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[627c65e7-6049-451a-9a5d-5aed999d2a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.547 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42d8ccb5-62d6-4b85-818c-1052bbce6e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.562 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a682535e-b296-4445-83d3-76fa60f2f533]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405559, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfa2f5a-8e66-4569-b64c-d0df7061fc8e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap21786b2a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701187, 'tstamp': 701187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405560, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.579 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.580 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 nova_compute[253538]: 2025-11-25 09:08:40.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.189 253542 INFO nova.virt.libvirt.driver [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deleting instance files /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410_del#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.190 253542 INFO nova.virt.libvirt.driver [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deletion of /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410_del complete#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.253 253542 INFO nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG oslo.service.loopingcall [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG nova.network.neutron [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.298 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.298 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:41 np0005534516 nova_compute[253538]: 2025-11-25 09:08:41.319 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:41 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:41Z|01516|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 04:08:41 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:41Z|01517|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 227 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 26 KiB/s wr, 58 op/s
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.601 253542 DEBUG nova.network.neutron [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.619 253542 INFO nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.649 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.651 253542 WARNING nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.651 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 WARNING nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-deleted-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.665 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.666 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:42 np0005534516 nova_compute[253538]: 2025-11-25 09:08:42.734 253542 DEBUG oslo_concurrency.processutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633812308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.192 253542 DEBUG oslo_concurrency.processutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.198 253542 DEBUG nova.compute.provider_tree [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.241 253542 DEBUG nova.scheduler.client.report [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.271 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.297 253542 INFO nova.scheduler.client.report [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 24611274-7a7c-4258-8631-032a6c1d8410#033[00m
Nov 25 04:08:43 np0005534516 nova_compute[253538]: 2025-11-25 09:08:43.357 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.102 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061709.1016028, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.103 253542 INFO nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.121 253542 DEBUG nova.compute.manager [None req-47934868-6c55-4104-9539-0ad2ef8b9e77 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 25 KiB/s wr, 29 op/s
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.307 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.307 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.309 253542 INFO nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Terminating instance#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.310 253542 DEBUG nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:08:44 np0005534516 kernel: tap05d0fd93-ce (unregistering): left promiscuous mode
Nov 25 04:08:44 np0005534516 NetworkManager[48915]: <info>  [1764061724.3904] device (tap05d0fd93-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01518|binding|INFO|Releasing lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 from this chassis (sb_readonly=0)
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01519|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 down in Southbound
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01520|binding|INFO|Removing iface tap05d0fd93-ce ovn-installed in OVS
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.411 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:ec:f8 10.100.0.7'], port_security=['fa:16:3e:e4:ec:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=05d0fd93-ce0f-4842-962f-c9491d3850c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.412 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 05d0fd93-ce0f-4842-962f-c9491d3850c8 in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e unbound from our chassis#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.413 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aca258b2-4181-467d-a2b6-14780485f4b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.415 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e namespace which is not needed anymore#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 kernel: tap2eda6ce1-df (unregistering): left promiscuous mode
Nov 25 04:08:44 np0005534516 NetworkManager[48915]: <info>  [1764061724.4373] device (tap2eda6ce1-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01521|binding|INFO|Releasing lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e from this chassis (sb_readonly=0)
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01522|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e down in Southbound
Nov 25 04:08:44 np0005534516 ovn_controller[152859]: 2025-11-25T09:08:44Z|01523|binding|INFO|Removing iface tap2eda6ce1-df ovn-installed in OVS
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.475 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], port_security=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fefc:ed73/64 2001:db8::f816:3eff:fefc:ed73/64', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2eda6ce1-df50-4620-a5e9-d08e62f7350e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:08:44 np0005534516 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 25 04:08:44 np0005534516 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 16.976s CPU time.
Nov 25 04:08:44 np0005534516 systemd-machined[215790]: Machine qemu-172-instance-0000008d terminated.
Nov 25 04:08:44 np0005534516 NetworkManager[48915]: <info>  [1764061724.5346] manager: (tap05d0fd93-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Nov 25 04:08:44 np0005534516 NetworkManager[48915]: <info>  [1764061724.5471] manager: (tap2eda6ce1-df): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.583 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance destroyed successfully.#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.584 253542 DEBUG nova.objects.instance [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.596 253542 DEBUG nova.virt.libvirt.vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:37Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.596 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.597 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.597 253542 DEBUG os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d0fd93-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.606 253542 INFO os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce')#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.607 253542 DEBUG nova.virt.libvirt.vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:37Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.607 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.609 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:08:44 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : haproxy version is 2.8.14-c23fe91
Nov 25 04:08:44 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : path to executable is /usr/sbin/haproxy
Nov 25 04:08:44 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [WARNING]  (403598) : Exiting Master process...
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.609 253542 DEBUG os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eda6ce1-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:44 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [ALERT]    (403598) : Current worker (403600) exited with code 143 (Terminated)
Nov 25 04:08:44 np0005534516 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [WARNING]  (403598) : All workers exited. Exiting... (0)
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 systemd[1]: libpod-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope: Deactivated successfully.
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.614 253542 INFO os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df')#033[00m
Nov 25 04:08:44 np0005534516 podman[405614]: 2025-11-25 09:08:44.620676324 +0000 UTC m=+0.085916826 container died 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 04:08:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa-userdata-shm.mount: Deactivated successfully.
Nov 25 04:08:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e09528d4ae8d074a0aa083a055a8dc133b765cc40f864841f0ed3ff34768cc83-merged.mount: Deactivated successfully.
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.804 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-deleted-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.806 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.806 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.807 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.807 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.808 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:08:44 np0005534516 podman[405614]: 2025-11-25 09:08:44.812281743 +0000 UTC m=+0.277522245 container cleanup 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:08:44 np0005534516 systemd[1]: libpod-conmon-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope: Deactivated successfully.
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.901 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.902 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.902 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.903 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.903 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.904 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:44 np0005534516 podman[405684]: 2025-11-25 09:08:44.909738872 +0000 UTC m=+0.066501049 container remove 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.922 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0231f5a3-07f3-4f98-ac32-fec44f19f244]: (4, ('Tue Nov 25 09:08:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e (9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa)\n9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa\nTue Nov 25 09:08:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e (9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa)\n9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b01468-cda2-45c5-8581-80aab1799641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.925 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:44 np0005534516 kernel: tap2a6609b2-b0: left promiscuous mode
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.943 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 nova_compute[253538]: 2025-11-25 09:08:44.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c36b5d13-a4f8-401f-af09-687e2cc70aef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.962 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f532be33-8405-4d5a-a859-8cbdf1b567bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.963 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78dc80ff-4238-4c54-9a64-79a0b5ba31c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e494a-d7f7-45bd-89f2-fd20e3bbb276]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701063, 'reachable_time': 19494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405699, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 systemd[1]: run-netns-ovnmeta\x2d2a6609b2\x2dbeb0\x2d48a5\x2d8dc0\x2d1a4c153da77e.mount: Deactivated successfully.
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.982 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.983 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0207fb-cbd4-4b7b-86c9-61d4d77355bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.983 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2eda6ce1-df50-4620-a5e9-d08e62f7350e in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.984 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.986 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[abe3672d-0930-45de-86a0-5aee95ea6de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:44 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.986 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 namespace which is not needed anymore#033[00m
Nov 25 04:08:45 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : haproxy version is 2.8.14-c23fe91
Nov 25 04:08:45 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : path to executable is /usr/sbin/haproxy
Nov 25 04:08:45 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [WARNING]  (403687) : Exiting Master process...
Nov 25 04:08:45 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [ALERT]    (403687) : Current worker (403689) exited with code 143 (Terminated)
Nov 25 04:08:45 np0005534516 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [WARNING]  (403687) : All workers exited. Exiting... (0)
Nov 25 04:08:45 np0005534516 systemd[1]: libpod-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope: Deactivated successfully.
Nov 25 04:08:45 np0005534516 podman[405718]: 2025-11-25 09:08:45.1344248 +0000 UTC m=+0.047008889 container died 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:08:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51-userdata-shm.mount: Deactivated successfully.
Nov 25 04:08:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-489261d86b924eb877c414a169c28246ea802beb251d049c1cd79a5b173b9a69-merged.mount: Deactivated successfully.
Nov 25 04:08:45 np0005534516 podman[405718]: 2025-11-25 09:08:45.178144738 +0000 UTC m=+0.090728817 container cleanup 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:08:45 np0005534516 systemd[1]: libpod-conmon-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope: Deactivated successfully.
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.212 253542 INFO nova.virt.libvirt.driver [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deleting instance files /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846_del#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.213 253542 INFO nova.virt.libvirt.driver [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deletion of /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846_del complete#033[00m
Nov 25 04:08:45 np0005534516 podman[405748]: 2025-11-25 09:08:45.246651801 +0000 UTC m=+0.044584113 container remove 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b212c38-19a3-4676-befa-3c41055eed14]: (4, ('Tue Nov 25 09:08:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 (4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51)\n4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51\nTue Nov 25 09:08:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 (4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51)\n4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[289cc496-5170-4280-a1e5-20afad61ca7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:45 np0005534516 kernel: tap21786b2a-50: left promiscuous mode
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.267 253542 INFO nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.267 253542 DEBUG oslo.service.loopingcall [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.268 253542 DEBUG nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.268 253542 DEBUG nova.network.neutron [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:08:45 np0005534516 nova_compute[253538]: 2025-11-25 09:08:45.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.276 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9e86c0-6c72-4bf2-a7e0-9ffdb3380636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c05936b9-1889-415d-923f-a33d091e65dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e52abfe-9e31-435e-8e9d-7169b245ac83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.312 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd4bf9-d269-4c9f-b165-f21a600d5f66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701165, 'reachable_time': 26579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405764, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.314 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:08:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.315 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[529dc2ef-91a3-425f-8663-ea5e67b8a29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:08:45 np0005534516 systemd[1]: run-netns-ovnmeta\x2d21786b2a\x2d59f9\x2d4c4e\x2db462\x2d8a28f7bd93a3.mount: Deactivated successfully.
Nov 25 04:08:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 23 KiB/s wr, 32 op/s
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.761 253542 DEBUG nova.network.neutron [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.777 253542 INFO nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 1.51 seconds to deallocate network for instance.#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.833 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.834 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.892 253542 DEBUG oslo_concurrency.processutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.931 253542 DEBUG nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.931 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:46 np0005534516 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 WARNING nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.030 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 WARNING nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-deleted-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-deleted-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:08:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717188419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.340 253542 DEBUG oslo_concurrency.processutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.350 253542 DEBUG nova.compute.provider_tree [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.365 253542 DEBUG nova.scheduler.client.report [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.389 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.430 253542 INFO nova.scheduler.client.report [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance a5ec67ec-7042-47d0-925d-6ff3847d3846#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.435 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.436 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.459 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:08:47 np0005534516 nova_compute[253538]: 2025-11-25 09:08:47.487 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:08:48 np0005534516 nova_compute[253538]: 2025-11-25 09:08:48.137 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:48 np0005534516 nova_compute[253538]: 2025-11-25 09:08:48.137 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 113 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 45 op/s
Nov 25 04:08:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:49 np0005534516 nova_compute[253538]: 2025-11-25 09:08:49.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:49 np0005534516 nova_compute[253538]: 2025-11-25 09:08:49.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 04:08:51 np0005534516 nova_compute[253538]: 2025-11-25 09:08:51.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:51 np0005534516 nova_compute[253538]: 2025-11-25 09:08:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:08:51 np0005534516 nova_compute[253538]: 2025-11-25 09:08:51.600 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:08:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 04:08:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:08:53
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:08:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:08:53 np0005534516 nova_compute[253538]: 2025-11-25 09:08:53.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:08:54 np0005534516 nova_compute[253538]: 2025-11-25 09:08:54.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.6 KiB/s wr, 38 op/s
Nov 25 04:08:54 np0005534516 nova_compute[253538]: 2025-11-25 09:08:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:54 np0005534516 nova_compute[253538]: 2025-11-25 09:08:54.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:08:54 np0005534516 nova_compute[253538]: 2025-11-25 09:08:54.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:55 np0005534516 nova_compute[253538]: 2025-11-25 09:08:55.450 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061720.4491594, 24611274-7a7c-4258-8631-032a6c1d8410 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:55 np0005534516 nova_compute[253538]: 2025-11-25 09:08:55.450 253542 INFO nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:08:55 np0005534516 nova_compute[253538]: 2025-11-25 09:08:55.469 253542 DEBUG nova.compute.manager [None req-bdb2c934-49ff-48f6-bdaa-4335a5f8666f - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Nov 25 04:08:56 np0005534516 nova_compute[253538]: 2025-11-25 09:08:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:57 np0005534516 nova_compute[253538]: 2025-11-25 09:08:57.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:57 np0005534516 nova_compute[253538]: 2025-11-25 09:08:57.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 04:08:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:08:58 np0005534516 nova_compute[253538]: 2025-11-25 09:08:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.576 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061724.5751293, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.576 253542 INFO nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.645 253542 DEBUG nova.compute.manager [None req-6836202c-8fc5-4033-9a5b-785cc7a0d1ad - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:08:59 np0005534516 nova_compute[253538]: 2025-11-25 09:08:59.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 04:09:01 np0005534516 podman[405789]: 2025-11-25 09:09:01.823051805 +0000 UTC m=+0.063223100 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:09:01 np0005534516 podman[405788]: 2025-11-25 09:09:01.829297664 +0000 UTC m=+0.075522784 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 04:09:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:02 np0005534516 nova_compute[253538]: 2025-11-25 09:09:02.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:09:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:04 np0005534516 nova_compute[253538]: 2025-11-25 09:09:04.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:09:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9535386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.032 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
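During each audit, nova-compute shells out to `ceph df --format=json` (the mon audit lines above show `client.openstack` dispatching the command) and sizes the RBD-backed disk pool from the JSON it gets back. A sketch of that consumption step; the field names follow `ceph df`'s JSON output, but the trimmed payload below is an illustration, not the actual response logged here:

```python
import json

# Trimmed, illustrative `ceph df --format=json` payload: 60 GiB raw,
# 59 GiB available -- consistent with the pgmap lines in this log.
sample = json.dumps({
    "stats": {"total_bytes": 64424509440, "total_avail_bytes": 63350767616},
    "pools": [{"name": "vms", "stats": {"bytes_used": 92274688}}],
})

def free_gib(ceph_df_output: str) -> float:
    # Convert the cluster-wide available bytes to GiB.
    return json.loads(ceph_df_output)["stats"]["total_avail_bytes"] / 1024 ** 3

print(free_gib(sample))  # 59.0
```

The 0.450s runtime logged for the subprocess is mostly the round trip to the mon; the parsing itself is trivial.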
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.209 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3674MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.306 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:09:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:09:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902937317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.719 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.727 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.746 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
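The inventory dict in the line above is what Placement schedules against: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A worked sketch of that published formula (not nova's actual code), applied to the values reported in this audit pass:

```python
# Placement's capacity formula: (total - reserved) * allocation_ratio,
# applied to the inventory this compute node just reported.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

def capacity(inv):
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

# 8 physical cores oversubscribed 4x -> 32 schedulable VCPUs;
# RAM is not oversubscribed, and disk is undersubscribed at 0.9.
print(capacity(inventory))
```

This is why an idle 8-vCPU host can still accept dozens of small instances: the scheduler counts against the ratio-adjusted capacity, not the raw hardware.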
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.966 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:09:05 np0005534516 nova_compute[253538]: 2025-11-25 09:09:05.966 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:09:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
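The ceph-mgr pgmap lines that recur every ~2 seconds below compress cluster health into one string. A sketch of pulling the PG state out of one of them, e.g. to alert when not every PG is `active+clean`:

```python
import re

# One of the pgmap summary lines from this log, minus the syslog prefix.
line = ("pgmap v2661: 321 pgs: 321 active+clean; "
        "88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail")

m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); "
              r"(.+?) data, (.+?) used, (.+?) / (.+?) avail", line)
version, total_pgs, in_state, state = m.group(1), m.group(2), m.group(3), m.group(4)
# Healthy when every PG is in the single reported state and that
# state is active+clean; degraded clusters list several state buckets.
healthy = total_pgs == in_state and state == "active+clean"
```

Note the regex assumes the single-state form seen here; a degraded cluster prints multiple `count state` buckets and would need a more general parser.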
Nov 25 04:09:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:08 np0005534516 podman[405873]: 2025-11-25 09:09:08.853710557 +0000 UTC m=+0.099332042 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
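The podman line above is a healthcheck event for the `ovn_controller` container, with the interesting bits (`health_status=healthy`, `health_failing_streak=0`) buried in a long `key=value` attribute list. A sketch of extracting just those fields from such a line; the trimmed sample below is illustrative, not the full logged line:

```python
import re

# Trimmed example of a podman "container health_status" journal line.
line = ("container health_status 8ad1e319ea26 "
        "(image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:..., "
        "name=ovn_controller, health_status=healthy, health_failing_streak=0)")

def health_fields(line: str) -> dict:
    # Grab simple key=value pairs; values run to the next comma or ')'.
    attrs = dict(re.findall(r"(\w+)=([^,)]+)", line))
    return {"name": attrs.get("name"),
            "status": attrs.get("health_status"),
            "failing_streak": int(attrs.get("health_failing_streak", 0))}
```

A real parser would need to handle nested values like the `config_data={...}` dict in the original line; the flat regex here is only good for the leading attributes.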
Nov 25 04:09:09 np0005534516 nova_compute[253538]: 2025-11-25 09:09:09.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:09 np0005534516 nova_compute[253538]: 2025-11-25 09:09:09.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:14 np0005534516 nova_compute[253538]: 2025-11-25 09:09:14.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:14 np0005534516 nova_compute[253538]: 2025-11-25 09:09:14.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:19 np0005534516 nova_compute[253538]: 2025-11-25 09:09:19.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:19 np0005534516 nova_compute[253538]: 2025-11-25 09:09:19.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0a869e24-bd15-4ad4-b68c-c1fc3e539624 does not exist
Nov 25 04:09:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b77957e3-2d2f-437d-9db9-7b1b910fada7 does not exist
Nov 25 04:09:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 995942e8-7cbb-44ec-82d3-d718509fe275 does not exist
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:09:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:09:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.385007136 +0000 UTC m=+0.050475053 container create b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:09:23 np0005534516 systemd[1]: Started libpod-conmon-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope.
Nov 25 04:09:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.35759646 +0000 UTC m=+0.023064407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.467971231 +0000 UTC m=+0.133439148 container init b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.475644159 +0000 UTC m=+0.141112076 container start b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.481005705 +0000 UTC m=+0.146477962 container attach b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:09:23 np0005534516 relaxed_jepsen[406186]: 167 167
Nov 25 04:09:23 np0005534516 systemd[1]: libpod-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope: Deactivated successfully.
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.482520757 +0000 UTC m=+0.147988704 container died b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:09:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1a1a3f6977a5659bfd2098e1fa6a495e7ae1a854ef22b08f7908a6882aef0794-merged.mount: Deactivated successfully.
Nov 25 04:09:23 np0005534516 podman[406170]: 2025-11-25 09:09:23.527089258 +0000 UTC m=+0.192557175 container remove b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 04:09:23 np0005534516 systemd[1]: libpod-conmon-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope: Deactivated successfully.
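The `relaxed_jepsen` container above is a short-lived cephadm helper: podman logs create, init, start, attach, died, and remove within about 150 ms, each event carrying a monotonic offset (`m=+0.141112076`). Assuming those offsets share one clock, the container's actual runtime can be read straight off them:

```python
# Monotonic offsets (seconds) copied from the podman events for the
# relaxed_jepsen helper container above.
events = {
    "create": 0.050475053,
    "init":   0.133439148,
    "start":  0.141112076,
    "attach": 0.146477962,
    "died":   0.147988704,
    "remove": 0.192557175,
}

# Time from process start to exit: the helper runs for only a few ms.
lifetime_ms = (events["died"] - events["start"]) * 1000
print(f"{lifetime_ms:.1f} ms")
```

That sub-10 ms lifetime is normal here: cephadm repeatedly spawns throwaway `ceph` containers (this one printed only `167 167`) rather than keeping a long-running client around.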
Nov 25 04:09:23 np0005534516 podman[406213]: 2025-11-25 09:09:23.678203926 +0000 UTC m=+0.041768866 container create 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:09:23 np0005534516 systemd[1]: Started libpod-conmon-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope.
Nov 25 04:09:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:09:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:09:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:23 np0005534516 podman[406213]: 2025-11-25 09:09:23.659519898 +0000 UTC m=+0.023084858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:23 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:23 np0005534516 podman[406213]: 2025-11-25 09:09:23.771493222 +0000 UTC m=+0.135058262 container init 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 04:09:23 np0005534516 podman[406213]: 2025-11-25 09:09:23.781176675 +0000 UTC m=+0.144741655 container start 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:09:23 np0005534516 podman[406213]: 2025-11-25 09:09:23.785717429 +0000 UTC m=+0.149282379 container attach 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:09:24 np0005534516 nova_compute[253538]: 2025-11-25 09:09:24.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:24 np0005534516 nova_compute[253538]: 2025-11-25 09:09:24.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:09:24 np0005534516 romantic_elgamal[406230]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:09:24 np0005534516 romantic_elgamal[406230]: --> relative data size: 1.0
Nov 25 04:09:24 np0005534516 romantic_elgamal[406230]: --> All data devices are unavailable
Nov 25 04:09:24 np0005534516 systemd[1]: libpod-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Deactivated successfully.
Nov 25 04:09:24 np0005534516 podman[406213]: 2025-11-25 09:09:24.858019498 +0000 UTC m=+1.221584448 container died 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:09:24 np0005534516 systemd[1]: libpod-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Consumed 1.013s CPU time.
Nov 25 04:09:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44-merged.mount: Deactivated successfully.
Nov 25 04:09:24 np0005534516 podman[406213]: 2025-11-25 09:09:24.971214235 +0000 UTC m=+1.334779175 container remove 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:09:24 np0005534516 systemd[1]: libpod-conmon-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Deactivated successfully.
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.603841433 +0000 UTC m=+0.100344950 container create 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.528616057 +0000 UTC m=+0.025119604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:25 np0005534516 systemd[1]: Started libpod-conmon-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope.
Nov 25 04:09:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.736881199 +0000 UTC m=+0.233384736 container init 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.747910808 +0000 UTC m=+0.244414325 container start 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.75054232 +0000 UTC m=+0.247045847 container attach 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:09:25 np0005534516 flamboyant_bell[406428]: 167 167
Nov 25 04:09:25 np0005534516 systemd[1]: libpod-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope: Deactivated successfully.
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.753608233 +0000 UTC m=+0.250111760 container died 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:09:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3e9ef2d5ac065b2bc5192ea50d99ebcc4f8f876dd3367fa9000921fe3149229a-merged.mount: Deactivated successfully.
Nov 25 04:09:25 np0005534516 podman[406412]: 2025-11-25 09:09:25.832796597 +0000 UTC m=+0.329300114 container remove 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 04:09:25 np0005534516 systemd[1]: libpod-conmon-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope: Deactivated successfully.
Nov 25 04:09:25 np0005534516 nova_compute[253538]: 2025-11-25 09:09:25.899 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:25 np0005534516 nova_compute[253538]: 2025-11-25 09:09:25.901 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:25 np0005534516 nova_compute[253538]: 2025-11-25 09:09:25.915 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.000 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.001 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.010 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.010 253542 INFO nova.compute.claims [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.051537653 +0000 UTC m=+0.080489860 container create 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.001061581 +0000 UTC m=+0.030013788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:26 np0005534516 systemd[1]: Started libpod-conmon-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope.
Nov 25 04:09:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.139 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.149830185 +0000 UTC m=+0.178782382 container init 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.159493647 +0000 UTC m=+0.188445824 container start 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.165349616 +0000 UTC m=+0.194301793 container attach 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:09:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:09:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:09:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534811428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.646 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.656 253542 DEBUG nova.compute.provider_tree [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.671 253542 DEBUG nova.scheduler.client.report [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.693 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.694 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.733 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.734 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.751 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.766 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.838 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.840 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.841 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating image(s)#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.864 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.886 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.905 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:26 np0005534516 practical_meitner[406469]: {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    "0": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "devices": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "/dev/loop3"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            ],
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_name": "ceph_lv0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_size": "21470642176",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "name": "ceph_lv0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "tags": {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_name": "ceph",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.crush_device_class": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.encrypted": "0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_id": "0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.vdo": "0"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            },
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "vg_name": "ceph_vg0"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        }
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    ],
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    "1": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "devices": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "/dev/loop4"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            ],
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_name": "ceph_lv1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_size": "21470642176",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "name": "ceph_lv1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "tags": {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_name": "ceph",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.crush_device_class": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.encrypted": "0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_id": "1",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.vdo": "0"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            },
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "vg_name": "ceph_vg1"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        }
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    ],
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    "2": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "devices": [
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "/dev/loop5"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            ],
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_name": "ceph_lv2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_size": "21470642176",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "name": "ceph_lv2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "tags": {
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.cluster_name": "ceph",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.crush_device_class": "",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.encrypted": "0",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osd_id": "2",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:                "ceph.vdo": "0"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            },
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "type": "block",
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:            "vg_name": "ceph_vg2"
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:        }
Nov 25 04:09:26 np0005534516 practical_meitner[406469]:    ]
Nov 25 04:09:26 np0005534516 practical_meitner[406469]: }
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.912 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:26 np0005534516 systemd[1]: libpod-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope: Deactivated successfully.
Nov 25 04:09:26 np0005534516 podman[406453]: 2025-11-25 09:09:26.938457382 +0000 UTC m=+0.967409559 container died 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:09:26 np0005534516 nova_compute[253538]: 2025-11-25 09:09:26.958 253542 DEBUG nova.policy [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:09:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e-merged.mount: Deactivated successfully.
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.019 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.021 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.022 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.022 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.044 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.048 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23232702-7686-425d-8921-7aa6192ca1c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:27 np0005534516 podman[406453]: 2025-11-25 09:09:27.082105388 +0000 UTC m=+1.111057555 container remove 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:09:27 np0005534516 systemd[1]: libpod-conmon-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope: Deactivated successfully.
Nov 25 04:09:27 np0005534516 podman[406749]: 2025-11-25 09:09:27.7605338 +0000 UTC m=+0.091541890 container create 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.763 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully created port: 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:09:27 np0005534516 podman[406749]: 2025-11-25 09:09:27.696425777 +0000 UTC m=+0.027433877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.823 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23232702-7686-425d-8921-7aa6192ca1c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:27 np0005534516 systemd[1]: Started libpod-conmon-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope.
Nov 25 04:09:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:27 np0005534516 podman[406749]: 2025-11-25 09:09:27.89441816 +0000 UTC m=+0.225426250 container init 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:09:27 np0005534516 nova_compute[253538]: 2025-11-25 09:09:27.895 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:09:27 np0005534516 podman[406749]: 2025-11-25 09:09:27.901732288 +0000 UTC m=+0.232740358 container start 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:09:27 np0005534516 quizzical_ardinghelli[406767]: 167 167
Nov 25 04:09:27 np0005534516 systemd[1]: libpod-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope: Deactivated successfully.
Nov 25 04:09:27 np0005534516 podman[406749]: 2025-11-25 09:09:27.908425551 +0000 UTC m=+0.239433621 container attach 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:09:27 np0005534516 conmon[406767]: conmon 218fccc0e195e87922e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope/container/memory.events
Nov 25 04:09:27 np0005534516 podman[406823]: 2025-11-25 09:09:27.948677605 +0000 UTC m=+0.022853233 container died 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:09:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-097fda53ec4e86b13dd9b1b1d56c921a2fd414a02e1c9e144d3e72e4ddb70390-merged.mount: Deactivated successfully.
Nov 25 04:09:28 np0005534516 podman[406823]: 2025-11-25 09:09:28.010445174 +0000 UTC m=+0.084620772 container remove 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 04:09:28 np0005534516 systemd[1]: libpod-conmon-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope: Deactivated successfully.
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.043 253542 DEBUG nova.objects.instance [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.054 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.054 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Ensure instance console log exists: /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:28 np0005534516 podman[406866]: 2025-11-25 09:09:28.189743468 +0000 UTC m=+0.054573064 container create d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.216 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully created port: 24beb614-6f72-4107-adca-af1258052ab5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:09:28 np0005534516 systemd[1]: Started libpod-conmon-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope.
Nov 25 04:09:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 95 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 267 KiB/s wr, 1 op/s
Nov 25 04:09:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:28 np0005534516 podman[406866]: 2025-11-25 09:09:28.163939316 +0000 UTC m=+0.028768972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:09:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:28 np0005534516 podman[406866]: 2025-11-25 09:09:28.279494777 +0000 UTC m=+0.144324343 container init d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:09:28 np0005534516 podman[406866]: 2025-11-25 09:09:28.296771097 +0000 UTC m=+0.161600663 container start d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 04:09:28 np0005534516 podman[406866]: 2025-11-25 09:09:28.303634324 +0000 UTC m=+0.168463890 container attach d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:09:28 np0005534516 nova_compute[253538]: 2025-11-25 09:09:28.925 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully updated port: 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:09:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:09:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:09:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:09:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.061 253542 DEBUG nova.compute.manager [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.062 253542 DEBUG nova.compute.manager [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.062 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.063 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.063 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.230 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:09:29 np0005534516 charming_moore[406883]: {
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_id": 1,
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "type": "bluestore"
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    },
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_id": 2,
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "type": "bluestore"
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    },
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_id": 0,
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:09:29 np0005534516 charming_moore[406883]:        "type": "bluestore"
Nov 25 04:09:29 np0005534516 charming_moore[406883]:    }
Nov 25 04:09:29 np0005534516 charming_moore[406883]: }
Nov 25 04:09:29 np0005534516 systemd[1]: libpod-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Deactivated successfully.
Nov 25 04:09:29 np0005534516 systemd[1]: libpod-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Consumed 1.013s CPU time.
Nov 25 04:09:29 np0005534516 podman[406866]: 2025-11-25 09:09:29.315342996 +0000 UTC m=+1.180172562 container died d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.527 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.547 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:09:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a-merged.mount: Deactivated successfully.
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.724 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully updated port: 24beb614-6f72-4107-adca-af1258052ab5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.739 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.740 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.740 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:09:29 np0005534516 nova_compute[253538]: 2025-11-25 09:09:29.897 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:09:30 np0005534516 podman[406866]: 2025-11-25 09:09:30.053609164 +0000 UTC m=+1.918438730 container remove d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:09:30 np0005534516 systemd[1]: libpod-conmon-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Deactivated successfully.
Nov 25 04:09:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:09:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:09:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e2b5150c-412b-406f-9bda-a2a0df841436 does not exist
Nov 25 04:09:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 83e10143-9f2b-4ec7-bd05-a5d85e1adeeb does not exist
Nov 25 04:09:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 108 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 681 KiB/s wr, 14 op/s
Nov 25 04:09:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:09:31 np0005534516 nova_compute[253538]: 2025-11-25 09:09:31.155 253542 DEBUG nova.compute.manager [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:31 np0005534516 nova_compute[253538]: 2025-11-25 09:09:31.155 253542 DEBUG nova.compute.manager [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-24beb614-6f72-4107-adca-af1258052ab5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:09:31 np0005534516 nova_compute[253538]: 2025-11-25 09:09:31.156 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:09:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.254 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.272 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.273 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance network_info: |[{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.274 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.275 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 24beb614-6f72-4107-adca-af1258052ab5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.281 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start _get_guest_xml network_info=[{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.288 253542 WARNING nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.300 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.301 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.305 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.305 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.306 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.307 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.308 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.308 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.310 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.310 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.311 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.312 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.312 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.320 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:09:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542945606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.790 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.813 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:32 np0005534516 nova_compute[253538]: 2025-11-25 09:09:32.817 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:32 np0005534516 podman[407001]: 2025-11-25 09:09:32.834150171 +0000 UTC m=+0.075887494 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:09:32 np0005534516 podman[407002]: 2025-11-25 09:09:32.854293979 +0000 UTC m=+0.095626191 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 04:09:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:09:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791276440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.290 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.292 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.293 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.294 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.295 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.295 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.296 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.297 253542 DEBUG nova.objects.instance [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.317 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <uuid>23232702-7686-425d-8921-7aa6192ca1c8</uuid>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <name>instance-00000090</name>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1362533958</nova:name>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:09:32</nova:creationTime>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:port uuid="2cf452f4-d6c3-4977-9e5b-874c9d9707e6">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <nova:port uuid="24beb614-6f72-4107-adca-af1258052ab5">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe18:a07e" ipVersion="6"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="serial">23232702-7686-425d-8921-7aa6192ca1c8</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="uuid">23232702-7686-425d-8921-7aa6192ca1c8</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/23232702-7686-425d-8921-7aa6192ca1c8_disk">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/23232702-7686-425d-8921-7aa6192ca1c8_disk.config">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:57:54:60"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <target dev="tap2cf452f4-d6"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:18:a0:7e"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <target dev="tap24beb614-6f"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/console.log" append="off"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:09:33 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:09:33 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:09:33 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:09:33 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.319 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Preparing to wait for external event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.320 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.321 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.322 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.322 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Preparing to wait for external event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.323 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.323 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.324 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.325 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.326 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.328 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.329 253542 DEBUG os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.331 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.332 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.336 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.337 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cf452f4-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.338 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cf452f4-d6, col_values=(('external_ids', {'iface-id': '2cf452f4-d6c3-4977-9e5b-874c9d9707e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:54:60', 'vm-uuid': '23232702-7686-425d-8921-7aa6192ca1c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.340 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 NetworkManager[48915]: <info>  [1764061773.3416] manager: (tap2cf452f4-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.347 253542 INFO os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6')#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.349 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.349 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.350 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.351 253542 DEBUG os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.352 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.353 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.356 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24beb614-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.357 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24beb614-6f, col_values=(('external_ids', {'iface-id': '24beb614-6f72-4107-adca-af1258052ab5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:a0:7e', 'vm-uuid': '23232702-7686-425d-8921-7aa6192ca1c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 NetworkManager[48915]: <info>  [1764061773.3599] manager: (tap24beb614-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.362 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.369 253542 INFO os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f')#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.423 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.424 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.425 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:57:54:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.425 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:18:a0:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.426 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Using config drive#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.451 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.727 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating config drive at /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.732 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23oeljy5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.769 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 24beb614-6f72-4107-adca-af1258052ab5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.770 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.782 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.875 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23oeljy5" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.908 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:09:33 np0005534516 nova_compute[253538]: 2025-11-25 09:09:33.912 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config 23232702-7686-425d-8921-7aa6192ca1c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.103 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config 23232702-7686-425d-8921-7aa6192ca1c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.104 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deleting local config drive /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config because it was imported into RBD.#033[00m
Nov 25 04:09:34 np0005534516 kernel: tap2cf452f4-d6: entered promiscuous mode
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.2008] manager: (tap2cf452f4-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01524|binding|INFO|Claiming lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for this chassis.
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01525|binding|INFO|2cf452f4-d6c3-4977-9e5b-874c9d9707e6: Claiming fa:16:3e:57:54:60 10.100.0.6
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.2525] manager: (tap24beb614-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Nov 25 04:09:34 np0005534516 systemd-udevd[407154]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:09:34 np0005534516 systemd-udevd[407155]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.278 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:54:60 10.100.0.6'], port_security=['fa:16:3e:57:54:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cf452f4-d6c3-4977-9e5b-874c9d9707e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.279 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 bound to our chassis#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.280 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6#033[00m
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.2856] device (tap2cf452f4-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.2865] device (tap2cf452f4-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6de6d5-c3ff-4605-a99b-84a20155f4a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.294 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bf4f588-e1 in ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.296 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bf4f588-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.297 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95d12376-6505-4945-a64f-9f8ec71e3d67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 systemd-machined[215790]: New machine qemu-174-instance-00000090.
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.297 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ccaa5aa-efad-4d7b-b2c0-07b36245006d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.309 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c36008-9c5c-41be-ab33-7ce880e2472b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e733c043-7dad-4a30-9443-4e16fb64cd1a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 kernel: tap24beb614-6f: entered promiscuous mode
Nov 25 04:09:34 np0005534516 systemd[1]: Started Virtual Machine qemu-174-instance-00000090.
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.3299] device (tap24beb614-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.3317] device (tap24beb614-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.335 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01526|binding|INFO|Claiming lport 24beb614-6f72-4107-adca-af1258052ab5 for this chassis.
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01527|binding|INFO|24beb614-6f72-4107-adca-af1258052ab5: Claiming fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01528|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 ovn-installed in OVS
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01529|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 ovn-installed in OVS
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.363 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9889ff75-acf7-4d25-be4e-0996443e8736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01530|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 up in Southbound
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01531|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 up in Southbound
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.3712] manager: (tap7bf4f588-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/628)
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.370 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], port_security=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe18:a07e/64', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=24beb614-6f72-4107-adca-af1258052ab5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.372 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[158ef1a0-4a25-480e-a7a2-669e2d83e2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.403 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[342a5fe0-6641-487b-899d-e6d2cc35f1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.410 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[32f2b838-db0f-4ee1-8abb-7ac5f5015641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.4333] device (tap7bf4f588-e0): carrier: link connected
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.440 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c0333e54-5870-4da3-8054-a358b1f52b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b35e0e95-02ad-4a04-af34-e6b6837d78f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407191, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.483 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64541eac-a104-4def-b8d5-9cd59b7fac03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:ccc4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712903, 'tstamp': 712903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407192, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8af0b789-3cbf-425a-a17f-7f818d236b45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407193, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.546 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53724410-3016-4623-bb95-9deb3374e56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.620 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[899b93e9-79ca-4108-bd17-fa7e4678cad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 NetworkManager[48915]: <info>  [1764061774.6273] manager: (tap7bf4f588-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Nov 25 04:09:34 np0005534516 kernel: tap7bf4f588-e0: entered promiscuous mode
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.635 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:34Z|01532|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.640 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[416c9b49-35e0-4e25-a63b-3f54405a0b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.643 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:09:34 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.645 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'env', 'PROCESS_TAG=haproxy-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.765 253542 DEBUG nova.compute.manager [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:34 np0005534516 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG nova.compute.manager [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Processing event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:09:35 np0005534516 podman[407226]: 2025-11-25 09:09:35.011338736 +0000 UTC m=+0.048686084 container create 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 04:09:35 np0005534516 systemd[1]: Started libpod-conmon-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope.
Nov 25 04:09:35 np0005534516 podman[407226]: 2025-11-25 09:09:34.986331436 +0000 UTC m=+0.023678804 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:09:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f10253aa29a2d9744f35c5a42f8958d44efd52ebdbabf6a5f52a1d200b9fc6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:35 np0005534516 podman[407226]: 2025-11-25 09:09:35.11228681 +0000 UTC m=+0.149634178 container init 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:09:35 np0005534516 podman[407226]: 2025-11-25 09:09:35.123788693 +0000 UTC m=+0.161136041 container start 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.125 253542 DEBUG nova.compute.manager [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG nova.compute.manager [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Processing event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:09:35 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : New worker (407265) forked
Nov 25 04:09:35 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : Loading success.
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.188 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 24beb614-6f72-4107-adca-af1258052ab5 in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.191 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.204 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a65dd3c-b562-4b68-b3f5-cd6af3803f9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.205 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3cc67e51-41 in ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3cc67e51-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d241de-1262-474a-9145-d13115ab4a99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b55cb31-8e54-42af-8bf2-35d50dc991ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.222 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[99d7bc0c-4ae0-4ed8-83dc-b2476cb8e2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09eaf04a-9974-4905-85d3-c51f8dc48f3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.265 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bff68061-c9da-4c9d-9bea-6e286b53e8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03e73254-4463-4944-9391-98acb7412f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 NetworkManager[48915]: <info>  [1764061775.2731] manager: (tap3cc67e51-40): new Veth device (/org/freedesktop/NetworkManager/Devices/630)
Nov 25 04:09:35 np0005534516 systemd-udevd[407173]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.307 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc7b631-4df4-45a2-b9db-6c15204f49f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.310 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2acf3026-932f-41fb-9b97-d65afd80437f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.313 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3132522, 23232702-7686-425d-8921-7aa6192ca1c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Started (Lifecycle Event)#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.316 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.319 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.322 253542 INFO nova.virt.libvirt.driver [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance spawned successfully.#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.322 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.332 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.336 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:09:35 np0005534516 NetworkManager[48915]: <info>  [1764061775.3403] device (tap3cc67e51-40): carrier: link connected
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.340 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.342 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.342 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[173e00a6-5d4c-4959-afbb-316423f9070a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.366 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3134649, 23232702-7686-425d-8921-7aa6192ca1c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00eb0e73-2813-4930-a9cb-71215364a8ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407313, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.382 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.382 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28776d6a-8bdc-48a0-8a65-c6a80e6078b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:1667'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712993, 'tstamp': 712993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407314, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.385 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3184218, 23232702-7686-425d-8921-7aa6192ca1c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.385 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f41b77-52df-4c8e-afa9-4d71e5aa5a0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407315, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.412 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.427 253542 INFO nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 8.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.428 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4037d67e-c35d-4c8a-a8c6-ddc4baeba408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.458 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[317f5dd3-0cd8-446a-9ab1-a7cdec802b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:35 np0005534516 NetworkManager[48915]: <info>  [1764061775.4749] manager: (tap3cc67e51-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 25 04:09:35 np0005534516 kernel: tap3cc67e51-40: entered promiscuous mode
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.478 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.483 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:09:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:35Z|01533|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cd975e-1ea7-4ee7-9989-2a9edf4f10c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.486 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:09:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.488 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'env', 'PROCESS_TAG=haproxy-3cc67e51-433c-4c50-9e32-11618e10c494', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3cc67e51-433c-4c50-9e32-11618e10c494.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.503 253542 INFO nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 9.54 seconds to build instance.#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:35 np0005534516 nova_compute[253538]: 2025-11-25 09:09:35.516 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:35 np0005534516 podman[407345]: 2025-11-25 09:09:35.879538177 +0000 UTC m=+0.028890716 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:09:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:09:36 np0005534516 podman[407345]: 2025-11-25 09:09:36.317163984 +0000 UTC m=+0.466516503 container create f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:09:36 np0005534516 systemd[1]: Started libpod-conmon-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope.
Nov 25 04:09:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:09:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b7818f4970c473f15ea612dd47dfe4573d2b3b5179a915f1ffcc79793320c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:09:36 np0005534516 podman[407345]: 2025-11-25 09:09:36.454181218 +0000 UTC m=+0.603533717 container init f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:09:36 np0005534516 podman[407345]: 2025-11-25 09:09:36.460783279 +0000 UTC m=+0.610135758 container start f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:09:36 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : New worker (407366) forked
Nov 25 04:09:36 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : Loading success.
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.836 253542 DEBUG nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.836 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:09:36 np0005534516 nova_compute[253538]: 2025-11-25 09:09:36.838 253542 WARNING nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 DEBUG nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 WARNING nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:09:37 np0005534516 nova_compute[253538]: 2025-11-25 09:09:37.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:37.677 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:09:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:37.680 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:09:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:38Z|01534|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 04:09:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:38Z|01535|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 04:09:38 np0005534516 NetworkManager[48915]: <info>  [1764061778.1090] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Nov 25 04:09:38 np0005534516 NetworkManager[48915]: <info>  [1764061778.1107] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:38Z|01536|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 04:09:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:38Z|01537|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 04:09:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.934 253542 DEBUG nova.compute.manager [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG nova.compute.manager [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:09:38 np0005534516 nova_compute[253538]: 2025-11-25 09:09:38.936 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:09:39 np0005534516 nova_compute[253538]: 2025-11-25 09:09:39.328 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:39 np0005534516 podman[407376]: 2025-11-25 09:09:39.882238567 +0000 UTC m=+0.109825396 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:09:40 np0005534516 nova_compute[253538]: 2025-11-25 09:09:40.043 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:09:40 np0005534516 nova_compute[253538]: 2025-11-25 09:09:40.045 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:09:40 np0005534516 nova_compute[253538]: 2025-11-25 09:09:40.063 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:09:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 88 op/s
Nov 25 04:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:09:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:09:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 04:09:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:43 np0005534516 nova_compute[253538]: 2025-11-25 09:09:43.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:09:44 np0005534516 nova_compute[253538]: 2025-11-25 09:09:44.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:45 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:09:45.682 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:09:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:09:47 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 25 04:09:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 140 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 86 op/s
Nov 25 04:09:48 np0005534516 nova_compute[253538]: 2025-11-25 09:09:48.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:48Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:54:60 10.100.0.6
Nov 25 04:09:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:09:48Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:54:60 10.100.0.6
Nov 25 04:09:48 np0005534516 nova_compute[253538]: 2025-11-25 09:09:48.968 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:48 np0005534516 nova_compute[253538]: 2025-11-25 09:09:48.968 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:49 np0005534516 nova_compute[253538]: 2025-11-25 09:09:49.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 157 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 04:09:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 166 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 04:09:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:09:53
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups']
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:09:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.951 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.952 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.952 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:09:53 np0005534516 nova_compute[253538]: 2025-11-25 09:09:53.953 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:09:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:09:54 np0005534516 nova_compute[253538]: 2025-11-25 09:09:54.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.828 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.993 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.994 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.994 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.995 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:57 np0005534516 nova_compute[253538]: 2025-11-25 09:09:57.995 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:09:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:09:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:09:58 np0005534516 nova_compute[253538]: 2025-11-25 09:09:58.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:58 np0005534516 nova_compute[253538]: 2025-11-25 09:09:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:58 np0005534516 nova_compute[253538]: 2025-11-25 09:09:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:09:59 np0005534516 nova_compute[253538]: 2025-11-25 09:09:59.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:09:59 np0005534516 nova_compute[253538]: 2025-11-25 09:09:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Nov 25 04:10:01 np0005534516 nova_compute[253538]: 2025-11-25 09:10:01.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 817 KiB/s wr, 18 op/s
Nov 25 04:10:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:03 np0005534516 nova_compute[253538]: 2025-11-25 09:10:03.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:03 np0005534516 podman[407404]: 2025-11-25 09:10:03.841821565 +0000 UTC m=+0.069188482 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:10:03 np0005534516 podman[407403]: 2025-11-25 09:10:03.871293746 +0000 UTC m=+0.098404256 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 27 KiB/s wr, 4 op/s
Nov 25 04:10:04 np0005534516 nova_compute[253538]: 2025-11-25 09:10:04.357 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:10:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:10:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:10:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.5 total, 600.0 interval#012Cumulative writes: 43K writes, 175K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 43K writes, 15K syncs, 2.86 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4589 writes, 19K keys, 4589 commit groups, 1.0 writes per commit group, ingest: 23.86 MB, 0.04 MB/s#012Interval WAL: 4589 writes, 1654 syncs, 2.77 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:10:05 np0005534516 nova_compute[253538]: 2025-11-25 09:10:05.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:10:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652824619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.067 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:10:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.310 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.311 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.312 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.312 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.382 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23232702-7686-425d-8921-7aa6192ca1c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.383 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.383 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.428 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:10:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067024358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.971 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:06 np0005534516 nova_compute[253538]: 2025-11-25 09:10:06.978 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:10:07 np0005534516 nova_compute[253538]: 2025-11-25 09:10:07.000 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:10:07 np0005534516 nova_compute[253538]: 2025-11-25 09:10:07.160 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:10:07 np0005534516 nova_compute[253538]: 2025-11-25 09:10:07.160 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:08 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:08Z|01538|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Nov 25 04:10:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 04:10:08 np0005534516 nova_compute[253538]: 2025-11-25 09:10:08.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:09 np0005534516 nova_compute[253538]: 2025-11-25 09:10:09.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 04:10:10 np0005534516 podman[407489]: 2025-11-25 09:10:10.392747225 +0000 UTC m=+0.080910041 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 04:10:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:10:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.4 total, 600.0 interval#012Cumulative writes: 42K writes, 168K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.81 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4521 writes, 19K keys, 4521 commit groups, 1.0 writes per commit group, ingest: 23.42 MB, 0.04 MB/s#012Interval WAL: 4521 writes, 1686 syncs, 2.68 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:10:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 04:10:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:13 np0005534516 nova_compute[253538]: 2025-11-25 09:10:13.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 04:10:14 np0005534516 nova_compute[253538]: 2025-11-25 09:10:14.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:14 np0005534516 nova_compute[253538]: 2025-11-25 09:10:14.709 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:14 np0005534516 nova_compute[253538]: 2025-11-25 09:10:14.710 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:14 np0005534516 nova_compute[253538]: 2025-11-25 09:10:14.755 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.018 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.019 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.026 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.026 253542 INFO nova.compute.claims [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.261 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:10:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060131293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.721 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.729 253542 DEBUG nova.compute.provider_tree [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.752 253542 DEBUG nova.scheduler.client.report [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.799 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.801 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.869 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:10:15 np0005534516 nova_compute[253538]: 2025-11-25 09:10:15.870 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.091 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.154 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:10:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.512 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.513 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.514 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating image(s)#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.537 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.561 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.587 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.591 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.668 253542 DEBUG nova.policy [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.690 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.693 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.720 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:16 np0005534516 nova_compute[253538]: 2025-11-25 09:10:16.725 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e30f8c90-01de-40a5-8c04-289a035fca22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.112 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e30f8c90-01de-40a5-8c04-289a035fca22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.180 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.271 253542 DEBUG nova.objects.instance [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.298 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.299 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Ensure instance console log exists: /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.299 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.300 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.300 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:17.797 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:10:17 np0005534516 nova_compute[253538]: 2025-11-25 09:10:17.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:17 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:17.798 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:10:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 428 KiB/s wr, 13 op/s
Nov 25 04:10:18 np0005534516 nova_compute[253538]: 2025-11-25 09:10:18.330 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully created port: a2ae2d19-2b35-4e83-b6ba-9f037762a501 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:10:18 np0005534516 nova_compute[253538]: 2025-11-25 09:10:18.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:19 np0005534516 nova_compute[253538]: 2025-11-25 09:10:19.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:19.801 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 195 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 952 KiB/s wr, 24 op/s
Nov 25 04:10:20 np0005534516 nova_compute[253538]: 2025-11-25 09:10:20.434 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully created port: aa75ca22-e976-4c62-b1e2-cc57fac51dec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:10:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:10:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4802.4 total, 600.0 interval#012Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3363 writes, 13K keys, 3363 commit groups, 1.0 writes per commit group, ingest: 13.80 MB, 0.02 MB/s#012Interval WAL: 3363 writes, 1299 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:10:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.473 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully updated port: a2ae2d19-2b35-4e83-b6ba-9f037762a501 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.568 253542 DEBUG nova.compute.manager [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.569 253542 DEBUG nova.compute.manager [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.569 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.570 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.570 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:10:22 np0005534516 nova_compute[253538]: 2025-11-25 09:10:22.740 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:10:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.316 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.331 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.388 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.769 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully updated port: aa75ca22-e976-4c62-b1e2-cc57fac51dec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.866 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.867 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:23 np0005534516 nova_compute[253538]: 2025-11-25 09:10:23.867 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:10:24 np0005534516 nova_compute[253538]: 2025-11-25 09:10:24.169 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:10:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:10:24 np0005534516 nova_compute[253538]: 2025-11-25 09:10:24.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:24 np0005534516 nova_compute[253538]: 2025-11-25 09:10:24.643 253542 DEBUG nova.compute.manager [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:24 np0005534516 nova_compute[253538]: 2025-11-25 09:10:24.644 253542 DEBUG nova.compute.manager [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-aa75ca22-e976-4c62-b1e2-cc57fac51dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:10:24 np0005534516 nova_compute[253538]: 2025-11-25 09:10:24.644 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:10:26 np0005534516 nova_compute[253538]: 2025-11-25 09:10:26.881 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.122 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.122 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance network_info: |[{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.123 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.124 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port aa75ca22-e976-4c62-b1e2-cc57fac51dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.134 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start _get_guest_xml network_info=[{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.142 253542 WARNING nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.155 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.156 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.161 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.162 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.163 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.163 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.164 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.165 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.165 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.166 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.167 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.167 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.168 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.168 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.169 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.170 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.175 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:10:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174543525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.654 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.677 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:27 np0005534516 nova_compute[253538]: 2025-11-25 09:10:27.681 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:10:28 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197990714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.149 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.154 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.155 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.157 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.158 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.159 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.160 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.162 253542 DEBUG nova.objects.instance [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.182 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <uuid>e30f8c90-01de-40a5-8c04-289a035fca22</uuid>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <name>instance-00000091</name>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-640466180</nova:name>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:10:27</nova:creationTime>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:port uuid="a2ae2d19-2b35-4e83-b6ba-9f037762a501">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <nova:port uuid="aa75ca22-e976-4c62-b1e2-cc57fac51dec">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe81:2453" ipVersion="6"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="serial">e30f8c90-01de-40a5-8c04-289a035fca22</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="uuid">e30f8c90-01de-40a5-8c04-289a035fca22</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e30f8c90-01de-40a5-8c04-289a035fca22_disk">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/e30f8c90-01de-40a5-8c04-289a035fca22_disk.config">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:1a:7a:85"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <target dev="tapa2ae2d19-2b"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:81:24:53"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <target dev="tapaa75ca22-e9"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/console.log" append="off"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:10:28 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:10:28 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:10:28 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:10:28 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.183 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Preparing to wait for external event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Preparing to wait for external event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.186 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.187 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.187 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.188 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.188 253542 DEBUG os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.189 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2ae2d19-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.196 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2ae2d19-2b, col_values=(('external_ids', {'iface-id': 'a2ae2d19-2b35-4e83-b6ba-9f037762a501', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:7a:85', 'vm-uuid': 'e30f8c90-01de-40a5-8c04-289a035fca22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 NetworkManager[48915]: <info>  [1764061828.1984] manager: (tapa2ae2d19-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.204 253542 INFO os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b')#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.205 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.205 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.206 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.206 253542 DEBUG os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.209 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa75ca22-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.210 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa75ca22-e9, col_values=(('external_ids', {'iface-id': 'aa75ca22-e976-4c62-b1e2-cc57fac51dec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:24:53', 'vm-uuid': 'e30f8c90-01de-40a5-8c04-289a035fca22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 NetworkManager[48915]: <info>  [1764061828.2119] manager: (tapaa75ca22-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.217 253542 INFO os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9')#033[00m
Nov 25 04:10:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:1a:7a:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.278 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:81:24:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.278 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Using config drive#033[00m
Nov 25 04:10:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.299 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.613 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating config drive at /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.624 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio18bur_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.683 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port aa75ca22-e976-4c62-b1e2-cc57fac51dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.684 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.705 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.797 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio18bur_" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.830 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:10:28 np0005534516 nova_compute[253538]: 2025-11-25 09:10:28.834 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config e30f8c90-01de-40a5-8c04-289a035fca22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:10:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:10:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:10:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:10:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.029 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config e30f8c90-01de-40a5-8c04-289a035fca22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.030 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deleting local config drive /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config because it was imported into RBD.#033[00m
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.0885] manager: (tapa2ae2d19-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Nov 25 04:10:29 np0005534516 kernel: tapa2ae2d19-2b: entered promiscuous mode
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01539|binding|INFO|Claiming lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 for this chassis.
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01540|binding|INFO|a2ae2d19-2b35-4e83-b6ba-9f037762a501: Claiming fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.1112] manager: (tapaa75ca22-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/637)
Nov 25 04:10:29 np0005534516 kernel: tapaa75ca22-e9: entered promiscuous mode
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01541|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 ovn-installed in OVS
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01542|if_status|INFO|Not updating pb chassis for aa75ca22-e976-4c62-b1e2-cc57fac51dec now as sb is readonly
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.171 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 systemd-udevd[407844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:10:29 np0005534516 systemd-udevd[407843]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:10:29 np0005534516 systemd-machined[215790]: New machine qemu-175-instance-00000091.
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.1882] device (tapa2ae2d19-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.1896] device (tapa2ae2d19-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.1928] device (tapaa75ca22-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:10:29 np0005534516 NetworkManager[48915]: <info>  [1764061829.1937] device (tapaa75ca22-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:10:29 np0005534516 systemd[1]: Started Virtual Machine qemu-175-instance-00000091.
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01543|binding|INFO|Claiming lport aa75ca22-e976-4c62-b1e2-cc57fac51dec for this chassis.
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01544|binding|INFO|aa75ca22-e976-4c62-b1e2-cc57fac51dec: Claiming fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:7a:85 10.100.0.3'], port_security=['fa:16:3e:1a:7a:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a2ae2d19-2b35-4e83-b6ba-9f037762a501) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01545|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 up in Southbound
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01546|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec ovn-installed in OVS
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a2ae2d19-2b35-4e83-b6ba-9f037762a501 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 bound to our chassis#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.278 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f37fd433-e0d1-47b2-9232-8273e74a7663]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.319 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42e14545-d7cd-40de-a588-dc8f377e804e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.323 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[79e5e28d-e72e-45e4-b5f8-1890b8cfae13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[15a2ba11-9cec-43ac-9c93-d751b8de0428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7337e8ca-1e8a-442e-bf18-fd7716e32396]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407860, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.382 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[806e44b8-ede8-4175-a6cc-2050f21ca3b9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712917, 'tstamp': 712917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407861, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712921, 'tstamp': 712921}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407861, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.383 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:29Z|01547|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec up in Southbound
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.524 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], port_security=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:2453/64', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=aa75ca22-e976-4c62-b1e2-cc57fac51dec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port aa75ca22-e976-4c62-b1e2-cc57fac51dec in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 bound to our chassis#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.527 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96fcaac6-9410-47f3-8320-db30e254ce08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.573 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f19b6d94-fe1a-4ac4-bd1c-49d4fa333a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[67417b36-de36-412f-9dee-8402683770f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.607 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b847d4d9-8978-4d0c-887c-dbf676680e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3718c77d-419c-4d9e-8739-d60db5fc19f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407907, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.639 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fdac41-7438-4b89-98c3-1548eec78c69]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cc67e51-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713006, 'tstamp': 713006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407910, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.640 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.646 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.646 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.744 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061829.7437663, e30f8c90-01de-40a5-8c04-289a035fca22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.744 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Started (Lifecycle Event)#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.765 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.770 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061829.7439542, e30f8c90-01de-40a5-8c04-289a035fca22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.770 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.787 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.790 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:10:29 np0005534516 nova_compute[253538]: 2025-11-25 09:10:29.807 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:10:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 18 op/s
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d3ae4a1e-1188-4d73-a34a-4fea98432f27 does not exist
Nov 25 04:10:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f9a51ff7-bd13-4a9a-a77c-476abd801f98 does not exist
Nov 25 04:10:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6ca4df0d-2b5a-486f-8a27-5c258e093420 does not exist
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.828866655 +0000 UTC m=+0.051364877 container create 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:10:31 np0005534516 systemd[1]: Started libpod-conmon-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope.
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.805215223 +0000 UTC m=+0.027713435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.940289014 +0000 UTC m=+0.162787246 container init 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.950562973 +0000 UTC m=+0.173061205 container start 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.954901421 +0000 UTC m=+0.177399653 container attach 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:10:31 np0005534516 quizzical_cerf[408203]: 167 167
Nov 25 04:10:31 np0005534516 systemd[1]: libpod-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope: Deactivated successfully.
Nov 25 04:10:31 np0005534516 conmon[408203]: conmon 2eaad78e45e4e03f2073 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope/container/memory.events
Nov 25 04:10:31 np0005534516 podman[408186]: 2025-11-25 09:10:31.959789184 +0000 UTC m=+0.182287416 container died 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 04:10:31 np0005534516 systemd[1]: var-lib-containers-storage-overlay-574f3d04aab82f2cf040c106de3bf2ff24e2fcf0e76438df3afac3554c474473-merged.mount: Deactivated successfully.
Nov 25 04:10:32 np0005534516 podman[408186]: 2025-11-25 09:10:32.012030794 +0000 UTC m=+0.234528986 container remove 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 04:10:32 np0005534516 systemd[1]: libpod-conmon-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope: Deactivated successfully.
Nov 25 04:10:32 np0005534516 podman[408229]: 2025-11-25 09:10:32.234464641 +0000 UTC m=+0.046571327 container create a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:10:32 np0005534516 systemd[1]: Started libpod-conmon-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope.
Nov 25 04:10:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 880 KiB/s wr, 8 op/s
Nov 25 04:10:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:32 np0005534516 podman[408229]: 2025-11-25 09:10:32.216185044 +0000 UTC m=+0.028291760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:32 np0005534516 podman[408229]: 2025-11-25 09:10:32.330263015 +0000 UTC m=+0.142369741 container init a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:10:32 np0005534516 podman[408229]: 2025-11-25 09:10:32.337658906 +0000 UTC m=+0.149765592 container start a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:10:32 np0005534516 podman[408229]: 2025-11-25 09:10:32.344748879 +0000 UTC m=+0.156855585 container attach a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.859 253542 DEBUG nova.compute.manager [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.861 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.861 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.862 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.862 253542 DEBUG nova.compute.manager [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Processing event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.916 253542 DEBUG nova.compute.manager [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.916 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG nova.compute.manager [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Processing event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.918 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.925 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061832.9254777, e30f8c90-01de-40a5-8c04-289a035fca22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.926 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Resumed (Lifecycle Event)
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.942 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.945 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.953 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.956 253542 INFO nova.virt.libvirt.driver [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance spawned successfully.
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.957 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.975 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.987 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.987 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.988 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.989 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:32 np0005534516 nova_compute[253538]: 2025-11-25 09:10:32.989 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:10:33 np0005534516 nova_compute[253538]: 2025-11-25 09:10:33.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.408555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833408610, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 255, "total_data_size": 3400252, "memory_usage": 3475328, "flush_reason": "Manual Compaction"}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833431178, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3342084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54387, "largest_seqno": 56519, "table_properties": {"data_size": 3332304, "index_size": 6209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20116, "raw_average_key_size": 20, "raw_value_size": 3312743, "raw_average_value_size": 3380, "num_data_blocks": 273, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061620, "oldest_key_time": 1764061620, "file_creation_time": 1764061833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 22659 microseconds, and 6479 cpu microseconds.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.431230) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3342084 bytes OK
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.431252) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432850) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432865) EVENT_LOG_v1 {"time_micros": 1764061833432860, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3391234, prev total WAL file size 3391234, number of live WAL files 2.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.433827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3263KB)], [128(8279KB)]
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833433856, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11819855, "oldest_snapshot_seqno": -1}
Nov 25 04:10:33 np0005534516 modest_hamilton[408245]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:10:33 np0005534516 modest_hamilton[408245]: --> relative data size: 1.0
Nov 25 04:10:33 np0005534516 modest_hamilton[408245]: --> All data devices are unavailable
Nov 25 04:10:33 np0005534516 systemd[1]: libpod-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Deactivated successfully.
Nov 25 04:10:33 np0005534516 podman[408229]: 2025-11-25 09:10:33.470586083 +0000 UTC m=+1.282692769 container died a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:10:33 np0005534516 systemd[1]: libpod-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Consumed 1.063s CPU time.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7780 keys, 10126774 bytes, temperature: kUnknown
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833500703, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10126774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10076285, "index_size": 29960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 202445, "raw_average_key_size": 26, "raw_value_size": 9938845, "raw_average_value_size": 1277, "num_data_blocks": 1172, "num_entries": 7780, "num_filter_entries": 7780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.500948) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10126774 bytes
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.502507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 151.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 8304, records dropped: 524 output_compression: NoCompression
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.502526) EVENT_LOG_v1 {"time_micros": 1764061833502517, "job": 78, "event": "compaction_finished", "compaction_time_micros": 66930, "compaction_time_cpu_micros": 28995, "output_level": 6, "num_output_files": 1, "total_output_size": 10126774, "num_input_records": 8304, "num_output_records": 7780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833503199, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833505012, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.433774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:10:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3-merged.mount: Deactivated successfully.
Nov 25 04:10:33 np0005534516 podman[408229]: 2025-11-25 09:10:33.555131802 +0000 UTC m=+1.367238498 container remove a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:10:33 np0005534516 systemd[1]: libpod-conmon-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Deactivated successfully.
Nov 25 04:10:33 np0005534516 nova_compute[253538]: 2025-11-25 09:10:33.719 253542 INFO nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 17.21 seconds to spawn the instance on the hypervisor.
Nov 25 04:10:33 np0005534516 nova_compute[253538]: 2025-11-25 09:10:33.722 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:10:33 np0005534516 podman[408386]: 2025-11-25 09:10:33.969086675 +0000 UTC m=+0.065880332 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:10:34 np0005534516 podman[408387]: 2025-11-25 09:10:34.002174704 +0000 UTC m=+0.098952400 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.215 253542 INFO nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 19.23 seconds to build instance.
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.283655086 +0000 UTC m=+0.052080086 container create 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 04:10:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 13 KiB/s wr, 7 op/s
Nov 25 04:10:34 np0005534516 systemd[1]: Started libpod-conmon-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope.
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.258378139 +0000 UTC m=+0.026803179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:10:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.396818213 +0000 UTC m=+0.165243243 container init 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.406273929 +0000 UTC m=+0.174698929 container start 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.409703762 +0000 UTC m=+0.178128772 container attach 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:10:34 np0005534516 infallible_moser[408481]: 167 167
Nov 25 04:10:34 np0005534516 systemd[1]: libpod-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope: Deactivated successfully.
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.41620118 +0000 UTC m=+0.184626180 container died 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:10:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9b966882ed199a006a9dada10119e00576719353983316202abe14b7a8312945-merged.mount: Deactivated successfully.
Nov 25 04:10:34 np0005534516 podman[408465]: 2025-11-25 09:10:34.454785658 +0000 UTC m=+0.223210648 container remove 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.487 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:10:34 np0005534516 systemd[1]: libpod-conmon-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope: Deactivated successfully.
Nov 25 04:10:34 np0005534516 podman[408505]: 2025-11-25 09:10:34.674575303 +0000 UTC m=+0.041052417 container create 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:10:34 np0005534516 systemd[1]: Started libpod-conmon-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope.
Nov 25 04:10:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:34 np0005534516 podman[408505]: 2025-11-25 09:10:34.657644943 +0000 UTC m=+0.024122077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:34 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:34 np0005534516 podman[408505]: 2025-11-25 09:10:34.768723722 +0000 UTC m=+0.135200866 container init 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:10:34 np0005534516 podman[408505]: 2025-11-25 09:10:34.778670713 +0000 UTC m=+0.145147827 container start 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:10:34 np0005534516 podman[408505]: 2025-11-25 09:10:34.783116784 +0000 UTC m=+0.149593948 container attach 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.948 253542 DEBUG nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.950 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.950 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.951 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.951 253542 DEBUG nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:10:34 np0005534516 nova_compute[253538]: 2025-11-25 09:10:34.952 253542 WARNING nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.030 253542 DEBUG nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.030 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:10:35 np0005534516 nova_compute[253538]: 2025-11-25 09:10:35.032 253542 WARNING nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with vm_state active and task_state None.#033[00m
Nov 25 04:10:35 np0005534516 brave_jang[408522]: {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    "0": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "devices": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "/dev/loop3"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            ],
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_name": "ceph_lv0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_size": "21470642176",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "name": "ceph_lv0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "tags": {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_name": "ceph",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.crush_device_class": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.encrypted": "0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_id": "0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.vdo": "0"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            },
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "vg_name": "ceph_vg0"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        }
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    ],
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    "1": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "devices": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "/dev/loop4"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            ],
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_name": "ceph_lv1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_size": "21470642176",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "name": "ceph_lv1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "tags": {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_name": "ceph",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.crush_device_class": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.encrypted": "0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_id": "1",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.vdo": "0"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            },
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "vg_name": "ceph_vg1"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        }
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    ],
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    "2": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "devices": [
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "/dev/loop5"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            ],
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_name": "ceph_lv2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_size": "21470642176",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "name": "ceph_lv2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "tags": {
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.cluster_name": "ceph",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.crush_device_class": "",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.encrypted": "0",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osd_id": "2",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:                "ceph.vdo": "0"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            },
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "type": "block",
Nov 25 04:10:35 np0005534516 brave_jang[408522]:            "vg_name": "ceph_vg2"
Nov 25 04:10:35 np0005534516 brave_jang[408522]:        }
Nov 25 04:10:35 np0005534516 brave_jang[408522]:    ]
Nov 25 04:10:35 np0005534516 brave_jang[408522]: }
Nov 25 04:10:35 np0005534516 systemd[1]: libpod-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope: Deactivated successfully.
Nov 25 04:10:35 np0005534516 podman[408505]: 2025-11-25 09:10:35.542206289 +0000 UTC m=+0.908683403 container died 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 04:10:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa-merged.mount: Deactivated successfully.
Nov 25 04:10:35 np0005534516 podman[408505]: 2025-11-25 09:10:35.599995459 +0000 UTC m=+0.966472563 container remove 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:10:35 np0005534516 systemd[1]: libpod-conmon-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope: Deactivated successfully.
Nov 25 04:10:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.195175859 +0000 UTC m=+0.023082209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.343604434 +0000 UTC m=+0.171510744 container create d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:10:36 np0005534516 systemd[1]: Started libpod-conmon-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope.
Nov 25 04:10:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.493734585 +0000 UTC m=+0.321640905 container init d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.500132719 +0000 UTC m=+0.328039029 container start d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 04:10:36 np0005534516 nice_lehmann[408698]: 167 167
Nov 25 04:10:36 np0005534516 systemd[1]: libpod-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope: Deactivated successfully.
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.511055275 +0000 UTC m=+0.338961615 container attach d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.51158016 +0000 UTC m=+0.339486470 container died d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:10:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-57e5ed7f3d547be82b3edeef16ad9012fc615701d73719561911e2f911a69afb-merged.mount: Deactivated successfully.
Nov 25 04:10:36 np0005534516 podman[408681]: 2025-11-25 09:10:36.572415813 +0000 UTC m=+0.400322123 container remove d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:10:36 np0005534516 systemd[1]: libpod-conmon-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope: Deactivated successfully.
Nov 25 04:10:36 np0005534516 podman[408722]: 2025-11-25 09:10:36.836085891 +0000 UTC m=+0.117304549 container create ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:10:36 np0005534516 podman[408722]: 2025-11-25 09:10:36.746423594 +0000 UTC m=+0.027642282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:10:36 np0005534516 systemd[1]: Started libpod-conmon-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope.
Nov 25 04:10:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:10:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:10:37 np0005534516 podman[408722]: 2025-11-25 09:10:37.141275357 +0000 UTC m=+0.422494025 container init ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:10:37 np0005534516 podman[408722]: 2025-11-25 09:10:37.148861724 +0000 UTC m=+0.430080372 container start ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:10:37 np0005534516 podman[408722]: 2025-11-25 09:10:37.216989365 +0000 UTC m=+0.498208013 container attach ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:10:38 np0005534516 magical_mendel[408738]: {
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_id": 1,
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "type": "bluestore"
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    },
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_id": 2,
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "type": "bluestore"
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    },
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_id": 0,
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:        "type": "bluestore"
Nov 25 04:10:38 np0005534516 magical_mendel[408738]:    }
Nov 25 04:10:38 np0005534516 magical_mendel[408738]: }
Nov 25 04:10:38 np0005534516 systemd[1]: libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Deactivated successfully.
Nov 25 04:10:38 np0005534516 systemd[1]: libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Consumed 1.044s CPU time.
Nov 25 04:10:38 np0005534516 conmon[408738]: conmon ac6c8c4529a71e2fe6f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope/container/memory.events
Nov 25 04:10:38 np0005534516 podman[408722]: 2025-11-25 09:10:38.240093858 +0000 UTC m=+1.521312506 container died ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e-merged.mount: Deactivated successfully.
Nov 25 04:10:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 04:10:38 np0005534516 podman[408722]: 2025-11-25 09:10:38.315009844 +0000 UTC m=+1.596228512 container remove ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:10:38 np0005534516 systemd[1]: libpod-conmon-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Deactivated successfully.
Nov 25 04:10:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:10:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:10:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 87366dac-6a43-411c-ab7e-bd81e3a5a69e does not exist
Nov 25 04:10:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7e7c539f-3c16-4855-aa9b-8cc3e4a0d495 does not exist
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.847 253542 DEBUG nova.compute.manager [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.848 253542 DEBUG nova.compute.manager [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.848 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.849 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:38 np0005534516 nova_compute[253538]: 2025-11-25 09:10:38.849 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:10:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:10:39 np0005534516 nova_compute[253538]: 2025-11-25 09:10:39.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Nov 25 04:10:40 np0005534516 nova_compute[253538]: 2025-11-25 09:10:40.823 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:10:40 np0005534516 nova_compute[253538]: 2025-11-25 09:10:40.823 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:10:40 np0005534516 podman[408831]: 2025-11-25 09:10:40.876675801 +0000 UTC m=+0.116534879 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 04:10:40 np0005534516 nova_compute[253538]: 2025-11-25 09:10:40.976 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.095 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.095 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 04:10:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.5 KiB/s wr, 70 op/s
Nov 25 04:10:43 np0005534516 nova_compute[253538]: 2025-11-25 09:10:43.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 69 op/s
Nov 25 04:10:44 np0005534516 nova_compute[253538]: 2025-11-25 09:10:44.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 67 op/s
Nov 25 04:10:47 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:47Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 04:10:47 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:47Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 04:10:48 np0005534516 nova_compute[253538]: 2025-11-25 09:10:48.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 82 op/s
Nov 25 04:10:49 np0005534516 nova_compute[253538]: 2025-11-25 09:10:49.161 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:49 np0005534516 nova_compute[253538]: 2025-11-25 09:10:49.162 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:49 np0005534516 nova_compute[253538]: 2025-11-25 09:10:49.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 25 04:10:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:10:53
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:10:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.824 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:10:53 np0005534516 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:10:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:10:54 np0005534516 nova_compute[253538]: 2025-11-25 09:10:54.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:10:58 np0005534516 nova_compute[253538]: 2025-11-25 09:10:58.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:10:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.073 253542 DEBUG nova.compute.manager [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.073 253542 DEBUG nova.compute.manager [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.605 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.606 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.606 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.607 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.607 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.609 253542 INFO nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Terminating instance#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.610 253542 DEBUG nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.674 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:10:59 np0005534516 kernel: tapa2ae2d19-2b (unregistering): left promiscuous mode
Nov 25 04:10:59 np0005534516 NetworkManager[48915]: <info>  [1764061859.6870] device (tapa2ae2d19-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.687 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.688 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01548|binding|INFO|Releasing lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 from this chassis (sb_readonly=0)
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01549|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 down in Southbound
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01550|binding|INFO|Removing iface tapa2ae2d19-2b ovn-installed in OVS
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 kernel: tapaa75ca22-e9 (unregistering): left promiscuous mode
Nov 25 04:10:59 np0005534516 NetworkManager[48915]: <info>  [1764061859.7724] device (tapaa75ca22-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01551|binding|INFO|Releasing lport aa75ca22-e976-4c62-b1e2-cc57fac51dec from this chassis (sb_readonly=1)
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01552|binding|INFO|Removing iface tapaa75ca22-e9 ovn-installed in OVS
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01553|if_status|INFO|Dropped 6 log messages in last 262 seconds (most recently, 262 seconds ago) due to excessive rate
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01554|if_status|INFO|Not setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec down as sb is readonly
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 ovn_controller[152859]: 2025-11-25T09:10:59Z|01555|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec down in Southbound
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.787 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:7a:85 10.100.0.3'], port_security=['fa:16:3e:1a:7a:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a2ae2d19-2b35-4e83-b6ba-9f037762a501) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.788 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a2ae2d19-2b35-4e83-b6ba-9f037762a501 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 unbound from our chassis#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.790 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65a403cf-5a00-4918-8480-bd631585b9a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 25 04:10:59 np0005534516 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Consumed 14.602s CPU time.
Nov 25 04:10:59 np0005534516 systemd-machined[215790]: Machine qemu-175-instance-00000091 terminated.
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.847 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[beb44bca-4e62-468f-ad1c-b8f84d0dfffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.850 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[93d34977-2dcb-4bdc-9d73-55e7b7ae38be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.881 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[65ac49f4-df5b-4ca5-8078-86e7daeac035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.886 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], port_security=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:2453/64', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=aa75ca22-e976-4c62-b1e2-cc57fac51dec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47c682a4-a6dc-4063-9f3c-6af5359c218c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408881, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.916 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55566d7a-f7e5-443e-a92a-a8a855084795]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712917, 'tstamp': 712917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408882, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712921, 'tstamp': 712921}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408882, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.919 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 nova_compute[253538]: 2025-11-25 09:10:59.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.929 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.929 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.930 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.930 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Port aa75ca22-e976-4c62-b1e2-cc57fac51dec in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.934 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.952 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87091937-5cb6-41e2-b5e1-f9e4efe73d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.991 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d74abeff-e3d3-4cf0-b18d-f14b9ed077f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:10:59 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.995 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a28c1aaa-e819-4f65-b5ca-1c36d9b0f900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.036 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[67110452-cae8-4b70-9e3b-66d5d9cfc379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:00 np0005534516 NetworkManager[48915]: <info>  [1764061860.0480] manager: (tapaa75ca22-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.056 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7efe3720-7e7c-4065-b9f6-0c8e2908e1ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408898, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.068 253542 INFO nova.virt.libvirt.driver [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance destroyed successfully.#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.069 253542 DEBUG nova.objects.instance [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1d19c7-1f24-40ff-bcd6-9c3b3efaf7b6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cc67e51-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713006, 'tstamp': 713006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408912, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.082 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.083 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.092 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.094 253542 DEBUG nova.virt.libvirt.vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:10:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:10:33Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.094 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.095 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.096 253542 DEBUG os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2ae2d19-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.108 253542 INFO os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b')#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.110 253542 DEBUG nova.virt.libvirt.vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:10:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:10:33Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.110 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.111 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.112 253542 DEBUG os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.115 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa75ca22-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.121 253542 INFO os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9')#033[00m
Nov 25 04:11:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 490 KiB/s wr, 30 op/s
Nov 25 04:11:00 np0005534516 nova_compute[253538]: 2025-11-25 09:11:00.682 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.112 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.113 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.113 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.114 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.114 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.115 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:01 np0005534516 nova_compute[253538]: 2025-11-25 09:11:01.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.273 253542 INFO nova.virt.libvirt.driver [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deleting instance files /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22_del#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.274 253542 INFO nova.virt.libvirt.driver [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deletion of /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22_del complete#033[00m
Nov 25 04:11:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 91 KiB/s wr, 10 op/s
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.368 253542 INFO nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 2.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.369 253542 DEBUG oslo.service.loopingcall [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.369 253542 DEBUG nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.370 253542 DEBUG nova.network.neutron [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.446 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.446 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:02.451 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:11:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:02.452 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:11:02 np0005534516 nova_compute[253538]: 2025-11-25 09:11:02.692 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:11:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 DEBUG nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 WARNING nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.514 253542 DEBUG nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:03 np0005534516 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 WARNING nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 25 04:11:04 np0005534516 nova_compute[253538]: 2025-11-25 09:11:04.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012764562780816565 of space, bias 1.0, pg target 0.38293688342449694 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:11:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
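Each "Pool … using R of space, bias B, pg target T" line above is consistent with the autoscaler multiplying the pool's usage ratio by its bias and a cluster-wide PG budget (evidently 300 here, i.e. the target-PGs-per-OSD setting times the OSD count), then rounding toward a power of two. A simplified sketch of that arithmetic (the real logic lives in the ceph-mgr `pg_autoscaler` module and additionally folds in per-pool minimums and only changes `pg_num` when the target moves far enough from the current value, which is why a raw target of 0.38 still reads "quantized to 32 (current 32)"):

```python
import math

# Budget inferred from the log: 0.0012764562780816565 * 1.0 * 300
# reproduces Pool 'vms' "pg target 0.38293688342449694".
PG_BUDGET = 300.0

def pg_target(usage_ratio: float, bias: float, pg_budget: float = PG_BUDGET) -> float:
    """Raw (un-quantized) PG target for one pool."""
    return usage_ratio * bias * pg_budget

def quantize(target: float, minimum: int = 1) -> int:
    """Round a raw target to the nearest power of two, with a floor.

    Simplified: the real autoscaler also respects current pg_num and
    per-pool minimums before acting.
    """
    if target <= minimum:
        return minimum
    return 2 ** round(math.log2(target))

raw = pg_target(0.0012764562780816565, 1.0)   # Pool 'vms' from the log
print(raw, quantize(raw))
```

The `cephfs.cephfs.meta` line shows bias 4.0 doing its job: metadata pools get a 4x boost to their raw target before quantization.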
Nov 25 04:11:04 np0005534516 nova_compute[253538]: 2025-11-25 09:11:04.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:04 np0005534516 podman[408937]: 2025-11-25 09:11:04.81493781 +0000 UTC m=+0.063263901 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:11:04 np0005534516 podman[408938]: 2025-11-25 09:11:04.835143359 +0000 UTC m=+0.083468130 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:11:04 np0005534516 nova_compute[253538]: 2025-11-25 09:11:04.866 253542 DEBUG nova.network.neutron [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:11:04 np0005534516 nova_compute[253538]: 2025-11-25 09:11:04.939 253542 INFO nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 2.57 seconds to deallocate network for instance.#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.121 253542 DEBUG oslo_concurrency.processutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:11:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945930788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.604 253542 DEBUG oslo_concurrency.processutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
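The resource tracker learns pool capacity by shelling out to `ceph df --format=json`, visible above both as the processutils "Running cmd" line and as the matching mon audit dispatch. A hedged sketch of issuing that command and reading the cluster totals (the `sample` payload below is made up for the offline demo; only its `stats.total_bytes` / `stats.total_avail_bytes` keys are standard `ceph df` JSON fields):

```python
import json
import subprocess

def parse_ceph_df(payload: str):
    """Pull cluster-wide totals out of `ceph df --format=json` output."""
    stats = json.loads(payload)["stats"]
    return stats["total_bytes"], stats["total_avail_bytes"]

def ceph_df(client: str = "openstack", conf: str = "/etc/ceph/ceph.conf"):
    """Run the exact command seen in the log; needs a reachable cluster and keyring."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        capture_output=True, check=True, text=True,
    ).stdout
    return parse_ceph_df(out)

# Offline demo with a hypothetical payload shaped like `ceph df` JSON
# (64411926528 is the capacity figure the pg_autoscaler logs above).
sample = '{"stats": {"total_bytes": 64411926528, "total_avail_bytes": 63000000000}}'
print(parse_ceph_df(sample))
```

Note the ~0.5s round-trip reported for each invocation; with three `ceph df` calls in this five-second window, the subprocess cost is a noticeable share of the periodic task's runtime.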
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.612 253542 DEBUG nova.compute.provider_tree [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.633 253542 DEBUG nova.compute.manager [req-d5a8a11a-268e-44e7-b8e8-8d86ba4eb6e9 req-8c25e619-4d19-4f22-a0f9-388a947dad67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-deleted-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.633 253542 DEBUG nova.compute.manager [req-d5a8a11a-268e-44e7-b8e8-8d86ba4eb6e9 req-8c25e619-4d19-4f22-a0f9-388a947dad67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-deleted-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.635 253542 DEBUG nova.scheduler.client.report [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.769 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:05 np0005534516 nova_compute[253538]: 2025-11-25 09:11:05.878 253542 INFO nova.scheduler.client.report [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance e30f8c90-01de-40a5-8c04-289a035fca22#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.072 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 28 op/s
Nov 25 04:11:06 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:06.454 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.589 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:11:06 np0005534516 nova_compute[253538]: 2025-11-25 09:11:06.589 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:11:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809866083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.068 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.157 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.157 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.324 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.325 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3419MB free_disk=59.922340393066406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.325 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.326 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23232702-7686-425d-8921-7aa6192ca1c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.387 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.387 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.424 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:11:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157968572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.861 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.870 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:11:07 np0005534516 nova_compute[253538]: 2025-11-25 09:11:07.885 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
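The inventory dict repeated in these report lines is what Placement uses to bound scheduling: for each resource class the effective capacity is (total − reserved) × allocation_ratio. A quick check against the values above (a sketch of the formula, not Nova code):

```python
def placement_capacity(total: float, reserved: float, allocation_ratio: float) -> float:
    """Placement's effective capacity for one resource class."""
    return (total - reserved) * allocation_ratio

# Inventory exactly as reported for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, placement_capacity(inv["total"], inv["reserved"], inv["allocation_ratio"]))
```

So this host can hand out up to 32 VCPUs (8 physical, 4x overcommit) but deliberately undercommits disk (ratio 0.9), matching the Ceph-backed ephemeral storage whose usage is polled via `ceph df` above.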
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.033 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.034 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.307614) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868307653, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 525, "num_deletes": 256, "total_data_size": 539408, "memory_usage": 550744, "flush_reason": "Manual Compaction"}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868336229, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 535047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56520, "largest_seqno": 57044, "table_properties": {"data_size": 532054, "index_size": 964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6697, "raw_average_key_size": 18, "raw_value_size": 526150, "raw_average_value_size": 1445, "num_data_blocks": 43, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061834, "oldest_key_time": 1764061834, "file_creation_time": 1764061868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
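The `EVENT_LOG_v1` entries are plain JSON after the marker, so rocksdb flush and compaction stats can be scraped straight out of the journal. A small sketch (the sample line reuses the `flush_started` payload from job 79 above):

```python
import json

MARKER = "EVENT_LOG_v1 "

def parse_event(line: str):
    """Extract the JSON payload from a rocksdb EVENT_LOG_v1 journal line."""
    i = line.find(MARKER)
    if i < 0:
        return None  # not an EVENT_LOG_v1 line
    return json.loads(line[i + len(MARKER):])

line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868307653, "job": 79, '
        '"event": "flush_started", "num_memtables": 1, "num_entries": 525, '
        '"num_deletes": 256, "total_data_size": 539408, "memory_usage": 550744, '
        '"flush_reason": "Manual Compaction"}')
ev = parse_event(line)
print(ev["event"], ev["num_entries"])  # -> flush_started 525
```

Pairing `flush_started` with the later `flush_finished` event for the same `"job"` gives per-flush durations without needing the free-text "Flush lasted … microseconds" lines.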
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 28745 microseconds, and 4330 cpu microseconds.
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.336299) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 535047 bytes OK
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.336378) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386575) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386616) EVENT_LOG_v1 {"time_micros": 1764061868386607, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 536389, prev total WAL file size 536389, number of live WAL files 2.
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.387182) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323638' seq:72057594037927935, type:22 .. '6C6F676D0032353230' seq:0, type:0; will stop at (end)
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(522KB)], [131(9889KB)]
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868387423, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10661821, "oldest_snapshot_seqno": -1}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7621 keys, 10541420 bytes, temperature: kUnknown
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868864775, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10541420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10490862, "index_size": 30413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 200049, "raw_average_key_size": 26, "raw_value_size": 10355040, "raw_average_value_size": 1358, "num_data_blocks": 1187, "num_entries": 7621, "num_filter_entries": 7621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.865042) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10541420 bytes
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.872643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.3 rd, 22.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(39.6) write-amplify(19.7) OK, records in: 8144, records dropped: 523 output_compression: NoCompression
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.872682) EVENT_LOG_v1 {"time_micros": 1764061868872664, "job": 80, "event": "compaction_finished", "compaction_time_micros": 477436, "compaction_time_cpu_micros": 30521, "output_level": 6, "num_output_files": 1, "total_output_size": 10541420, "num_input_records": 8144, "num_output_records": 7621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868873023, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868875613, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.387069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.937 253542 DEBUG nova.compute.manager [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.937 253542 DEBUG nova.compute.manager [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:11:08 np0005534516 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.067 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.069 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.070 253542 INFO nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Terminating instance#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.071 253542 DEBUG nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 kernel: tap2cf452f4-d6 (unregistering): left promiscuous mode
Nov 25 04:11:09 np0005534516 NetworkManager[48915]: <info>  [1764061869.4961] device (tap2cf452f4-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01556|binding|INFO|Releasing lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 from this chassis (sb_readonly=0)
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01557|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 down in Southbound
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01558|binding|INFO|Removing iface tap2cf452f4-d6 ovn-installed in OVS
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 kernel: tap24beb614-6f (unregistering): left promiscuous mode
Nov 25 04:11:09 np0005534516 NetworkManager[48915]: <info>  [1764061869.5422] device (tap24beb614-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01559|binding|INFO|Releasing lport 24beb614-6f72-4107-adca-af1258052ab5 from this chassis (sb_readonly=1)
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01560|binding|INFO|Removing iface tap24beb614-6f ovn-installed in OVS
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 25 04:11:09 np0005534516 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Consumed 17.936s CPU time.
Nov 25 04:11:09 np0005534516 systemd-machined[215790]: Machine qemu-174-instance-00000090 terminated.
Nov 25 04:11:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:11:09Z|01561|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 down in Southbound
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.636 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:54:60 10.100.0.6'], port_security=['fa:16:3e:57:54:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cf452f4-d6c3-4977-9e5b-874c9d9707e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.638 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 unbound from our chassis#033[00m
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.639 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6847c358-e372-45d5-8655-53d42856823e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.641 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 namespace which is not needed anymore#033[00m
Nov 25 04:11:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.670 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], port_security=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe18:a07e/64', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=24beb614-6f72-4107-adca-af1258052ab5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:11:09 np0005534516 NetworkManager[48915]: <info>  [1764061869.7084] manager: (tap24beb614-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.729 253542 INFO nova.virt.libvirt.driver [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance destroyed successfully.#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.731 253542 DEBUG nova.objects.instance [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.746 253542 DEBUG nova.virt.libvirt.vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:09:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:09:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.746 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.747 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.748 253542 DEBUG os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.750 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cf452f4-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.760 253542 INFO os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6')#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.761 253542 DEBUG nova.virt.libvirt.vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:09:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:09:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.761 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.762 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.763 253542 DEBUG os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.764 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24beb614-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:11:09 np0005534516 nova_compute[253538]: 2025-11-25 09:11:09.770 253542 INFO os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f')#033[00m
Nov 25 04:11:09 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : haproxy version is 2.8.14-c23fe91
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : path to executable is /usr/sbin/haproxy
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [WARNING]  (407260) : Exiting Master process...
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [ALERT]    (407260) : Current worker (407265) exited with code 143 (Terminated)
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [WARNING]  (407260) : All workers exited. Exiting... (0)
Nov 25 04:11:10 np0005534516 systemd[1]: libpod-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope: Deactivated successfully.
Nov 25 04:11:10 np0005534516 podman[409090]: 2025-11-25 09:11:10.009246412 +0000 UTC m=+0.245954347 container died 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:11:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d-userdata-shm.mount: Deactivated successfully.
Nov 25 04:11:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5f10253aa29a2d9744f35c5a42f8958d44efd52ebdbabf6a5f52a1d200b9fc6b-merged.mount: Deactivated successfully.
Nov 25 04:11:10 np0005534516 podman[409090]: 2025-11-25 09:11:10.220846635 +0000 UTC m=+0.457554570 container cleanup 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:11:10 np0005534516 systemd[1]: libpod-conmon-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope: Deactivated successfully.
Nov 25 04:11:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 10 KiB/s wr, 28 op/s
Nov 25 04:11:10 np0005534516 podman[409138]: 2025-11-25 09:11:10.315991411 +0000 UTC m=+0.073916921 container remove 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62c12c0d-5c00-4f99-b85a-90d043b17c10]: (4, ('Tue Nov 25 09:11:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 (21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d)\n21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d\nTue Nov 25 09:11:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 (21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d)\n21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[385403c2-4b85-4fc6-b7b0-0cc48de47e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.324 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:10 np0005534516 nova_compute[253538]: 2025-11-25 09:11:10.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:10 np0005534516 kernel: tap7bf4f588-e0: left promiscuous mode
Nov 25 04:11:10 np0005534516 nova_compute[253538]: 2025-11-25 09:11:10.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.347 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a70d5-f978-45e9-80fb-00bd25fca234]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[148b9266-a187-43fc-ac8f-29802022e072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.361 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b44d07-1047-4efc-9f89-5c39eed8bbf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[870b2a11-59c1-4d59-aa3f-459160887c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712895, 'reachable_time': 19419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409151, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.388 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.388 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3580b9-2dfd-42e1-be08-381a94384806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.389 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 24beb614-6f72-4107-adca-af1258052ab5 in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis#033[00m
Nov 25 04:11:10 np0005534516 systemd[1]: run-netns-ovnmeta\x2d7bf4f588\x2debc7\x2d4f3f\x2dbad9\x2d0474cdb461a6.mount: Deactivated successfully.
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.390 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cc67e51-433c-4c50-9e32-11618e10c494, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7e072d-f2aa-4ff7-bfad-51cd7e76eeab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.391 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 namespace which is not needed anymore#033[00m
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : haproxy version is 2.8.14-c23fe91
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : path to executable is /usr/sbin/haproxy
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [WARNING]  (407364) : Exiting Master process...
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [ALERT]    (407364) : Current worker (407366) exited with code 143 (Terminated)
Nov 25 04:11:10 np0005534516 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [WARNING]  (407364) : All workers exited. Exiting... (0)
Nov 25 04:11:10 np0005534516 systemd[1]: libpod-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope: Deactivated successfully.
Nov 25 04:11:10 np0005534516 podman[409168]: 2025-11-25 09:11:10.759800065 +0000 UTC m=+0.260284766 container died f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:11:10 np0005534516 nova_compute[253538]: 2025-11-25 09:11:10.936 253542 INFO nova.virt.libvirt.driver [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deleting instance files /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8_del#033[00m
Nov 25 04:11:10 np0005534516 nova_compute[253538]: 2025-11-25 09:11:10.938 253542 INFO nova.virt.libvirt.driver [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deletion of /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8_del complete#033[00m
Nov 25 04:11:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-15b7818f4970c473f15ea612dd47dfe4573d2b3b5179a915f1ffcc79793320c6-merged.mount: Deactivated successfully.
Nov 25 04:11:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d-userdata-shm.mount: Deactivated successfully.
Nov 25 04:11:10 np0005534516 podman[409168]: 2025-11-25 09:11:10.949085302 +0000 UTC m=+0.449570043 container cleanup f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:11:10 np0005534516 systemd[1]: libpod-conmon-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope: Deactivated successfully.
Nov 25 04:11:11 np0005534516 podman[409204]: 2025-11-25 09:11:11.027523824 +0000 UTC m=+0.046856185 container remove f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72a8363c-a65f-4a00-bb61-8fe6785491d4]: (4, ('Tue Nov 25 09:11:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 (f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d)\nf6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d\nTue Nov 25 09:11:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 (f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d)\nf6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f0d961-d837-47b8-80bb-1a3e52c737dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:11 np0005534516 kernel: tap3cc67e51-40: left promiscuous mode
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.048 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.049 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.049 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.052 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.052 253542 WARNING nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[625d7fa2-b818-4ccc-acf7-a58fe9938e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf18e22d-3356-4fd6-ad35-f3c608437b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.071 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcff5b2-f687-4cd8-b482-a75758f363fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.079 253542 INFO nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 2.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG oslo.service.loopingcall [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG nova.network.neutron [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:11:11 np0005534516 podman[409197]: 2025-11-25 09:11:11.087139494 +0000 UTC m=+0.134013764 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[98808b5a-e7da-43fa-a852-83d589a14c5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712985, 'reachable_time': 39888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409241, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.090 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:11:11 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.090 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b9f351-15ee-42a0-8b91-1ad8d8568719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:11 np0005534516 systemd[1]: run-netns-ovnmeta\x2d3cc67e51\x2d433c\x2d4c50\x2d9e32\x2d11618e10c494.mount: Deactivated successfully.
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.119 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.119 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.120 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.120 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.121 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.121 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.844 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.845 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:11:11 np0005534516 nova_compute[253538]: 2025-11-25 09:11:11.867 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:11:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 49 op/s
Nov 25 04:11:12 np0005534516 nova_compute[253538]: 2025-11-25 09:11:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:12 np0005534516 nova_compute[253538]: 2025-11-25 09:11:12.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.059 253542 DEBUG nova.network.neutron [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.129 253542 INFO nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 2.05 seconds to deallocate network for instance.#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.217 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.217 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 WARNING nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-deleted-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-deleted-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.221 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.222 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.267 253542 DEBUG oslo_concurrency.processutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:11:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269866015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.699 253542 DEBUG oslo_concurrency.processutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.706 253542 DEBUG nova.compute.provider_tree [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.726 253542 DEBUG nova.scheduler.client.report [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.796 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.866 253542 INFO nova.scheduler.client.report [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 23232702-7686-425d-8921-7aa6192ca1c8#033[00m
Nov 25 04:11:13 np0005534516 nova_compute[253538]: 2025-11-25 09:11:13.928 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 49 op/s
Nov 25 04:11:14 np0005534516 nova_compute[253538]: 2025-11-25 09:11:14.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:14 np0005534516 nova_compute[253538]: 2025-11-25 09:11:14.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:15 np0005534516 nova_compute[253538]: 2025-11-25 09:11:15.064 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061860.0629854, e30f8c90-01de-40a5-8c04-289a035fca22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:11:15 np0005534516 nova_compute[253538]: 2025-11-25 09:11:15.065 253542 INFO nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:11:15 np0005534516 nova_compute[253538]: 2025-11-25 09:11:15.082 253542 DEBUG nova.compute.manager [None req-671db374-3833-4943-a2c1-4b845b32ceef - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:11:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Nov 25 04:11:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Nov 25 04:11:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:11:19 np0005534516 nova_compute[253538]: 2025-11-25 09:11:19.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 04:11:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 04:11:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 6 op/s
Nov 25 04:11:24 np0005534516 nova_compute[253538]: 2025-11-25 09:11:24.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:24 np0005534516 nova_compute[253538]: 2025-11-25 09:11:24.726 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061869.7246559, 23232702-7686-425d-8921-7aa6192ca1c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:11:24 np0005534516 nova_compute[253538]: 2025-11-25 09:11:24.726 253542 INFO nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:11:24 np0005534516 nova_compute[253538]: 2025-11-25 09:11:24.741 253542 DEBUG nova.compute.manager [None req-0f45dd52-99e4-42e8-9019-d27f6e7ebc79 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:11:24 np0005534516 nova_compute[253538]: 2025-11-25 09:11:24.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 5 op/s
Nov 25 04:11:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:29 np0005534516 nova_compute[253538]: 2025-11-25 09:11:29.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:29 np0005534516 nova_compute[253538]: 2025-11-25 09:11:29.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:31 np0005534516 nova_compute[253538]: 2025-11-25 09:11:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:31 np0005534516 nova_compute[253538]: 2025-11-25 09:11:31.872 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:34 np0005534516 nova_compute[253538]: 2025-11-25 09:11:34.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:34 np0005534516 nova_compute[253538]: 2025-11-25 09:11:34.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:35 np0005534516 podman[409269]: 2025-11-25 09:11:35.812752216 +0000 UTC m=+0.057690858 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:11:35 np0005534516 podman[409268]: 2025-11-25 09:11:35.823232361 +0000 UTC m=+0.076441208 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:11:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 80dc861e-d747-492b-bdcc-055aba6e25b1 does not exist
Nov 25 04:11:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3d42c405-f862-4b61-a46a-75f0f94a23bf does not exist
Nov 25 04:11:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3b4f3c47-c5b9-4601-8f51-fa311145cec2 does not exist
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:11:39 np0005534516 nova_compute[253538]: 2025-11-25 09:11:39.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:11:39 np0005534516 nova_compute[253538]: 2025-11-25 09:11:39.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:39.951993248 +0000 UTC m=+0.037605393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.107202707 +0000 UTC m=+0.192814832 container create c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:40 np0005534516 systemd[1]: Started libpod-conmon-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope.
Nov 25 04:11:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.189275088 +0000 UTC m=+0.274887213 container init c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.197318807 +0000 UTC m=+0.282930932 container start c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.200289958 +0000 UTC m=+0.285902093 container attach c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 04:11:40 np0005534516 lucid_herschel[409594]: 167 167
Nov 25 04:11:40 np0005534516 systemd[1]: libpod-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope: Deactivated successfully.
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.203036472 +0000 UTC m=+0.288648597 container died c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:11:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-024e037a0fa6d2b0c0fa6e1734cd32af1218634665780f703eef61e6b417bf25-merged.mount: Deactivated successfully.
Nov 25 04:11:40 np0005534516 podman[409578]: 2025-11-25 09:11:40.236077201 +0000 UTC m=+0.321689326 container remove c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:11:40 np0005534516 systemd[1]: libpod-conmon-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope: Deactivated successfully.
Nov 25 04:11:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:40 np0005534516 podman[409620]: 2025-11-25 09:11:40.439765628 +0000 UTC m=+0.053226358 container create 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:11:40 np0005534516 systemd[1]: Started libpod-conmon-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope.
Nov 25 04:11:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:40 np0005534516 podman[409620]: 2025-11-25 09:11:40.419916308 +0000 UTC m=+0.033377058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:40 np0005534516 podman[409620]: 2025-11-25 09:11:40.524216764 +0000 UTC m=+0.137677524 container init 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:11:40 np0005534516 podman[409620]: 2025-11-25 09:11:40.537979648 +0000 UTC m=+0.151440358 container start 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:40 np0005534516 podman[409620]: 2025-11-25 09:11:40.545587075 +0000 UTC m=+0.159047795 container attach 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:41 np0005534516 musing_curran[409635]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:11:41 np0005534516 musing_curran[409635]: --> relative data size: 1.0
Nov 25 04:11:41 np0005534516 musing_curran[409635]: --> All data devices are unavailable
Nov 25 04:11:41 np0005534516 systemd[1]: libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Deactivated successfully.
Nov 25 04:11:41 np0005534516 systemd[1]: libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Consumed 1.004s CPU time.
Nov 25 04:11:41 np0005534516 conmon[409635]: conmon 6f582e60efe8893fe8ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope/container/memory.events
Nov 25 04:11:41 np0005534516 podman[409620]: 2025-11-25 09:11:41.59676861 +0000 UTC m=+1.210229320 container died 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:11:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57-merged.mount: Deactivated successfully.
Nov 25 04:11:41 np0005534516 podman[409620]: 2025-11-25 09:11:41.663922105 +0000 UTC m=+1.277382805 container remove 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:11:41 np0005534516 systemd[1]: libpod-conmon-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Deactivated successfully.
Nov 25 04:11:41 np0005534516 podman[409666]: 2025-11-25 09:11:41.75642586 +0000 UTC m=+0.118234535 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:11:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.164 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8::f816:3eff:fef0:f70e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08f181bc-bee1-4710-a487-b95c62cfce38) old=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:11:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.165 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08f181bc-bee1-4710-a487-b95c62cfce38 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 updated#033[00m
Nov 25 04:11:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.166 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:11:42 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.167 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1ed3d0-f656-4b0e-aadd-67e4f33257a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.29755176 +0000 UTC m=+0.039743231 container create 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 04:11:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:42 np0005534516 systemd[1]: Started libpod-conmon-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope.
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.280411734 +0000 UTC m=+0.022603125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.398066662 +0000 UTC m=+0.140258083 container init 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.407530799 +0000 UTC m=+0.149722160 container start 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.410716336 +0000 UTC m=+0.152907727 container attach 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:11:42 np0005534516 sad_brown[409861]: 167 167
Nov 25 04:11:42 np0005534516 systemd[1]: libpod-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope: Deactivated successfully.
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.417765808 +0000 UTC m=+0.159957199 container died 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:11:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fa4644053f06a5cb9970416252bc1ed9810c181cc6b8819ed76b5ac502ee0780-merged.mount: Deactivated successfully.
Nov 25 04:11:42 np0005534516 podman[409845]: 2025-11-25 09:11:42.450754645 +0000 UTC m=+0.192946026 container remove 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:42 np0005534516 systemd[1]: libpod-conmon-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope: Deactivated successfully.
Nov 25 04:11:42 np0005534516 podman[409885]: 2025-11-25 09:11:42.635890878 +0000 UTC m=+0.046845145 container create 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:11:42 np0005534516 systemd[1]: Started libpod-conmon-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope.
Nov 25 04:11:42 np0005534516 podman[409885]: 2025-11-25 09:11:42.614481435 +0000 UTC m=+0.025435752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:42 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:42 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:42 np0005534516 podman[409885]: 2025-11-25 09:11:42.749511066 +0000 UTC m=+0.160465403 container init 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:11:42 np0005534516 podman[409885]: 2025-11-25 09:11:42.75846319 +0000 UTC m=+0.169417467 container start 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:11:42 np0005534516 podman[409885]: 2025-11-25 09:11:42.761741939 +0000 UTC m=+0.172696256 container attach 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:11:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:43 np0005534516 silly_yalow[409901]: {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    "0": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "devices": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "/dev/loop3"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            ],
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_name": "ceph_lv0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_size": "21470642176",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "name": "ceph_lv0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "tags": {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_name": "ceph",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.crush_device_class": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.encrypted": "0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_id": "0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.vdo": "0"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            },
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "vg_name": "ceph_vg0"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        }
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    ],
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    "1": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "devices": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "/dev/loop4"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            ],
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_name": "ceph_lv1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_size": "21470642176",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "name": "ceph_lv1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "tags": {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_name": "ceph",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.crush_device_class": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.encrypted": "0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_id": "1",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.vdo": "0"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            },
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "vg_name": "ceph_vg1"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        }
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    ],
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    "2": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "devices": [
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "/dev/loop5"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            ],
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_name": "ceph_lv2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_size": "21470642176",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "name": "ceph_lv2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "tags": {
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.cluster_name": "ceph",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.crush_device_class": "",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.encrypted": "0",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osd_id": "2",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:                "ceph.vdo": "0"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            },
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "type": "block",
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:            "vg_name": "ceph_vg2"
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:        }
Nov 25 04:11:43 np0005534516 silly_yalow[409901]:    ]
Nov 25 04:11:43 np0005534516 silly_yalow[409901]: }
Nov 25 04:11:43 np0005534516 systemd[1]: libpod-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope: Deactivated successfully.
Nov 25 04:11:43 np0005534516 podman[409885]: 2025-11-25 09:11:43.607641924 +0000 UTC m=+1.018596181 container died 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:11:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f-merged.mount: Deactivated successfully.
Nov 25 04:11:43 np0005534516 podman[409885]: 2025-11-25 09:11:43.788919081 +0000 UTC m=+1.199873348 container remove 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:11:43 np0005534516 systemd[1]: libpod-conmon-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope: Deactivated successfully.
Nov 25 04:11:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.400062995 +0000 UTC m=+0.042030803 container create 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:11:44 np0005534516 nova_compute[253538]: 2025-11-25 09:11:44.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:44 np0005534516 systemd[1]: Started libpod-conmon-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope.
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.384938913 +0000 UTC m=+0.026906741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.499877888 +0000 UTC m=+0.141845726 container init 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.504806522 +0000 UTC m=+0.146774330 container start 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.508157863 +0000 UTC m=+0.150125701 container attach 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:11:44 np0005534516 nice_davinci[410076]: 167 167
Nov 25 04:11:44 np0005534516 systemd[1]: libpod-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope: Deactivated successfully.
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.511511214 +0000 UTC m=+0.153479022 container died 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:11:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-001d7dad2cdd72052f792ab04075ea5559219cb9ce7e468494d58c3c752b0e22-merged.mount: Deactivated successfully.
Nov 25 04:11:44 np0005534516 podman[410060]: 2025-11-25 09:11:44.555820289 +0000 UTC m=+0.197788137 container remove 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:44 np0005534516 systemd[1]: libpod-conmon-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope: Deactivated successfully.
Nov 25 04:11:44 np0005534516 podman[410100]: 2025-11-25 09:11:44.747837929 +0000 UTC m=+0.056396104 container create ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:11:44 np0005534516 systemd[1]: Started libpod-conmon-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope.
Nov 25 04:11:44 np0005534516 podman[410100]: 2025-11-25 09:11:44.71733469 +0000 UTC m=+0.025892865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:11:44 np0005534516 nova_compute[253538]: 2025-11-25 09:11:44.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:11:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:11:44 np0005534516 podman[410100]: 2025-11-25 09:11:44.96787326 +0000 UTC m=+0.276431495 container init ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:11:44 np0005534516 podman[410100]: 2025-11-25 09:11:44.976349711 +0000 UTC m=+0.284907896 container start ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:11:44 np0005534516 podman[410100]: 2025-11-25 09:11:44.979592759 +0000 UTC m=+0.288150934 container attach ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]: {
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_id": 1,
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "type": "bluestore"
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    },
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_id": 2,
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "type": "bluestore"
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    },
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_id": 0,
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:        "type": "bluestore"
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]:    }
Nov 25 04:11:45 np0005534516 compassionate_shaw[410117]: }
Nov 25 04:11:46 np0005534516 systemd[1]: libpod-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Deactivated successfully.
Nov 25 04:11:46 np0005534516 systemd[1]: libpod-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Consumed 1.026s CPU time.
Nov 25 04:11:46 np0005534516 podman[410100]: 2025-11-25 09:11:46.011233683 +0000 UTC m=+1.319791868 container died ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:11:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a-merged.mount: Deactivated successfully.
Nov 25 04:11:46 np0005534516 podman[410100]: 2025-11-25 09:11:46.091688999 +0000 UTC m=+1.400247144 container remove ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:11:46 np0005534516 systemd[1]: libpod-conmon-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Deactivated successfully.
Nov 25 04:11:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:11:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:11:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e24b4369-0a2e-411e-a53d-0bc600d7406c does not exist
Nov 25 04:11:46 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4efe061a-0f5b-47bc-abd9-847bb23f30e4 does not exist
Nov 25 04:11:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:11:47 np0005534516 nova_compute[253538]: 2025-11-25 09:11:47.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.030 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8:0:1:f816:3eff:fef0:f70e 2001:db8::f816:3eff:fef0:f70e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fef0:f70e/64 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08f181bc-bee1-4710-a487-b95c62cfce38) old=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8::f816:3eff:fef0:f70e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:11:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.032 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08f181bc-bee1-4710-a487-b95c62cfce38 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 updated#033[00m
Nov 25 04:11:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.033 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:11:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cba74a-ee95-432d-b966-b1b524d724d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:11:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:49 np0005534516 nova_compute[253538]: 2025-11-25 09:11:49.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:49 np0005534516 nova_compute[253538]: 2025-11-25 09:11:49.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:49 np0005534516 nova_compute[253538]: 2025-11-25 09:11:49.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.521 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.522 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.555 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.635 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.636 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.644 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.645 253542 INFO nova.compute.claims [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:11:52 np0005534516 nova_compute[253538]: 2025-11-25 09:11:52.791 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:11:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131094186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.245 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.252 253542 DEBUG nova.compute.provider_tree [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.266 253542 DEBUG nova.scheduler.client.report [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:11:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.360 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.361 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:11:53
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'vms', 'backups', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.421 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.422 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:11:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.525 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.544 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.651 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.653 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.653 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating image(s)#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.686 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.706 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.729 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.733 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.782 253542 DEBUG nova.policy [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.817 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.818 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.818 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.819 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.848 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:53 np0005534516 nova_compute[253538]: 2025-11-25 09:11:53.853 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f5964963-11b8-4fd9-ace9-e5ee67571925_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.170 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f5964963-11b8-4fd9-ace9-e5ee67571925_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.220 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.301 253542 DEBUG nova.objects.instance [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:11:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.326 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Ensure instance console log exists: /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.328 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.754 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Successfully created port: 637fce28-ce53-4bd9-95fb-dc0675dd7009 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:11:54 np0005534516 nova_compute[253538]: 2025-11-25 09:11:54.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.535 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Successfully updated port: 637fce28-ce53-4bd9-95fb-dc0675dd7009 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG nova.compute.manager [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG nova.compute.manager [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:11:55 np0005534516 nova_compute[253538]: 2025-11-25 09:11:55.718 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:11:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 104 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 650 KiB/s wr, 1 op/s
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.757 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.790 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.791 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance network_info: |[{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.792 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.793 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.798 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start _get_guest_xml network_info=[{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.806 253542 WARNING nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.817 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.819 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.823 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.824 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:11:57 np0005534516 nova_compute[253538]: 2025-11-25 09:11:57.831 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:11:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589502343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.307 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:11:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.334 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.337 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:11:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:11:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553256577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.772 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.773 253542 DEBUG nova.virt.libvirt.vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:11:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.774 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.775 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.777 253542 DEBUG nova.objects.instance [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.789 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <uuid>f5964963-11b8-4fd9-ace9-e5ee67571925</uuid>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <name>instance-00000092</name>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1898594818</nova:name>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:11:57</nova:creationTime>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <nova:port uuid="637fce28-ce53-4bd9-95fb-dc0675dd7009">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe59:4e62" ipVersion="6"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe59:4e62" ipVersion="6"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="serial">f5964963-11b8-4fd9-ace9-e5ee67571925</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="uuid">f5964963-11b8-4fd9-ace9-e5ee67571925</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f5964963-11b8-4fd9-ace9-e5ee67571925_disk">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:59:4e:62"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <target dev="tap637fce28-ce"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/console.log" append="off"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:11:58 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:11:58 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:11:58 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:11:58 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.790 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Preparing to wait for external event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.792 253542 DEBUG nova.virt.libvirt.vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:11:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.792 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.793 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.794 253542 DEBUG os_vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.795 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.795 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap637fce28-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap637fce28-ce, col_values=(('external_ids', {'iface-id': '637fce28-ce53-4bd9-95fb-dc0675dd7009', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:4e:62', 'vm-uuid': 'f5964963-11b8-4fd9-ace9-e5ee67571925'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:58 np0005534516 NetworkManager[48915]: <info>  [1764061918.8029] manager: (tap637fce28-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.811 253542 INFO os_vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce')#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.858 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.859 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.859 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:59:4e:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.860 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Using config drive#033[00m
Nov 25 04:11:58 np0005534516 nova_compute[253538]: 2025-11-25 09:11:58.886 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:11:59 np0005534516 nova_compute[253538]: 2025-11-25 09:11:59.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:11:59 np0005534516 nova_compute[253538]: 2025-11-25 09:11:59.890 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating config drive at /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config#033[00m
Nov 25 04:11:59 np0005534516 nova_compute[253538]: 2025-11-25 09:11:59.895 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qp6oy1o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.064 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qp6oy1o" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.091 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.095 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.406 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.407 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deleting local config drive /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config because it was imported into RBD.#033[00m
Nov 25 04:12:00 np0005534516 kernel: tap637fce28-ce: entered promiscuous mode
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.4868] manager: (tap637fce28-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/641)
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:00Z|01562|binding|INFO|Claiming lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 for this chassis.
Nov 25 04:12:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:00Z|01563|binding|INFO|637fce28-ce53-4bd9-95fb-dc0675dd7009: Claiming fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 systemd-udevd[410538]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.560 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], port_security=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe59:4e62/64 2001:db8::f816:3eff:fe59:4e62/64', 'neutron:device_id': 'f5964963-11b8-4fd9-ace9-e5ee67571925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=637fce28-ce53-4bd9-95fb-dc0675dd7009) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.561 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 637fce28-ce53-4bd9-95fb-dc0675dd7009 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 bound to our chassis#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7#033[00m
Nov 25 04:12:00 np0005534516 systemd-machined[215790]: New machine qemu-176-instance-00000092.
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.5741] device (tap637fce28-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.5749] device (tap637fce28-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a65c10-6e8f-44d9-b87c-f4a00a0cfda8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.577 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c73317d-f1 in ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c73317d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e283315d-92e7-4048-b70b-c9b2aafb3945]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2780a2-57b6-4b7a-a998-d55741baae9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.592 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2d225b4c-d9d8-472d-a9f8-39eb9816246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:00Z|01564|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 ovn-installed in OVS
Nov 25 04:12:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:00Z|01565|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 up in Southbound
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 systemd[1]: Started Virtual Machine qemu-176-instance-00000092.
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[790422a2-a904-405b-b8df-f5248ad675e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.633 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1a853244-e882-42da-a91b-dd369c4e9f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 systemd-udevd[410542]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c83020-5220-4c8e-abd5-e2ac51343660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.6394] manager: (tap6c73317d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/642)
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.663 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[097cd7c8-b190-4abe-be2e-49436b321121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.666 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f40b5f84-939c-4e09-b7ff-5672b1bc26ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.6909] device (tap6c73317d-f0): carrier: link connected
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.699 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53b8c74a-9744-4606-a743-2e9ad7b2aff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.719 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5f3f27-514b-4a89-a7d7-dd93aa688068]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410572, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b04d295c-cf1b-4038-9260-683959449f81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:f70e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727529, 'tstamp': 727529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410573, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.752 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa37ba-0c25-4a6f-a23b-d04cc5f11ef5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 410574, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.780 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29470ef7-7760-4720-8686-344342504983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10bb94a6-b49c-4085-97a3-67dbf5be2e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:00 np0005534516 kernel: tap6c73317d-f0: entered promiscuous mode
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 NetworkManager[48915]: <info>  [1764061920.8330] manager: (tap6c73317d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.835 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:00Z|01566|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.849 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41f03221-7a75-4089-8260-743e70a9ef5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.851 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:12:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.852 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'env', 'PROCESS_TAG=haproxy-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c73317d-f647-4813-8469-7d8f6ba2c0c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.996 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061920.995621, f5964963-11b8-4fd9-ace9-e5ee67571925 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:00 np0005534516 nova_compute[253538]: 2025-11-25 09:12:00.997 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Started (Lifecycle Event)#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.014 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.018 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061920.9968166, f5964963-11b8-4fd9-ace9-e5ee67571925 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.018 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.033 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.036 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG nova.compute.manager [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.050 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.050 253542 DEBUG nova.compute.manager [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Processing event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.051 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.051 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.054 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061921.0542498, f5964963-11b8-4fd9-ace9-e5ee67571925 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.054 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.056 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.058 253542 INFO nova.virt.libvirt.driver [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance spawned successfully.#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.058 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.080 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.081 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.081 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.082 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.082 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.083 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.107 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.244 253542 INFO nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 7.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.245 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:01 np0005534516 podman[410649]: 2025-11-25 09:12:01.252968143 +0000 UTC m=+0.104747628 container create c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:12:01 np0005534516 podman[410649]: 2025-11-25 09:12:01.16749278 +0000 UTC m=+0.019272265 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:12:01 np0005534516 systemd[1]: Started libpod-conmon-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope.
Nov 25 04:12:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8dc5cf81cc7a79b68a934106958d2a3e5f0faa44968a236fc77d5815309e349/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.339 253542 INFO nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 8.74 seconds to build instance.#033[00m
Nov 25 04:12:01 np0005534516 nova_compute[253538]: 2025-11-25 09:12:01.366 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:01 np0005534516 podman[410649]: 2025-11-25 09:12:01.389881505 +0000 UTC m=+0.241661010 container init c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 04:12:01 np0005534516 podman[410649]: 2025-11-25 09:12:01.395099957 +0000 UTC m=+0.246879432 container start c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 04:12:01 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : New worker (410671) forked
Nov 25 04:12:01 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : Loading success.
Nov 25 04:12:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.449 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.451 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.466 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:02.581 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:02 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:02.583 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.862 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid f5964963-11b8-4fd9-ace9-e5ee67571925 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.862 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.863 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:02 np0005534516 nova_compute[253538]: 2025-11-25 09:12:02.883 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.140 253542 DEBUG nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.142 253542 DEBUG nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.142 253542 WARNING nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received unexpected event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:12:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:03 np0005534516 nova_compute[253538]: 2025-11-25 09:12:03.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10383726680015627 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:12:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:12:04 np0005534516 nova_compute[253538]: 2025-11-25 09:12:04.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:05.584 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:06 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:06Z|01567|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 04:12:06 np0005534516 NetworkManager[48915]: <info>  [1764061926.0048] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Nov 25 04:12:06 np0005534516 NetworkManager[48915]: <info>  [1764061926.0056] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:06 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:06Z|01568|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 04:12:06 np0005534516 podman[410681]: 2025-11-25 09:12:06.805873904 +0000 UTC m=+0.057249378 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:12:06 np0005534516 podman[410682]: 2025-11-25 09:12:06.809257906 +0000 UTC m=+0.057258228 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.832 253542 DEBUG nova.compute.manager [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG nova.compute.manager [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:12:06 np0005534516 nova_compute[253538]: 2025-11-25 09:12:06.834 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:12:07 np0005534516 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:12:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72231431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.083 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.297 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.298 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.299 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.299 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.372 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f5964963-11b8-4fd9-ace9-e5ee67571925 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.373 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.373 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.415 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.909 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.910 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:12:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:12:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794914448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.943 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.966 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.974 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:12:08 np0005534516 nova_compute[253538]: 2025-11-25 09:12:08.990 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:12:09 np0005534516 nova_compute[253538]: 2025-11-25 09:12:09.046 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:12:09 np0005534516 nova_compute[253538]: 2025-11-25 09:12:09.047 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:09 np0005534516 nova_compute[253538]: 2025-11-25 09:12:09.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 127 op/s
Nov 25 04:12:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Nov 25 04:12:12 np0005534516 podman[410763]: 2025-11-25 09:12:12.83356675 +0000 UTC m=+0.078722761 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 04:12:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:13 np0005534516 nova_compute[253538]: 2025-11-25 09:12:13.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 135 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 404 KiB/s wr, 135 op/s
Nov 25 04:12:14 np0005534516 nova_compute[253538]: 2025-11-25 09:12:14.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:15 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:15Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:4e:62 10.100.0.4
Nov 25 04:12:15 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:15Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:4e:62 10.100.0.4
Nov 25 04:12:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 144 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Nov 25 04:12:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 161 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 617 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 04:12:18 np0005534516 nova_compute[253538]: 2025-11-25 09:12:18.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:19 np0005534516 nova_compute[253538]: 2025-11-25 09:12:19.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 162 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 04:12:21 np0005534516 systemd[1]: Starting dnf makecache...
Nov 25 04:12:21 np0005534516 dnf[410792]: Metadata cache refreshed recently.
Nov 25 04:12:22 np0005534516 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 04:12:22 np0005534516 systemd[1]: Finished dnf makecache.
Nov 25 04:12:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 04:12:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:23 np0005534516 nova_compute[253538]: 2025-11-25 09:12:23.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 04:12:24 np0005534516 nova_compute[253538]: 2025-11-25 09:12:24.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 04:12:26 np0005534516 nova_compute[253538]: 2025-11-25 09:12:26.774 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:26 np0005534516 nova_compute[253538]: 2025-11-25 09:12:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:26 np0005534516 nova_compute[253538]: 2025-11-25 09:12:26.834 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.013 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.014 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.023 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.024 253542 INFO nova.compute.claims [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.319 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:12:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877166837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.768 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.775 253542 DEBUG nova.compute.provider_tree [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.790 253542 DEBUG nova.scheduler.client.report [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.956 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:27 np0005534516 nova_compute[253538]: 2025-11-25 09:12:27.957 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.001 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.002 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.027 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.135 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.312 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:12:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.314 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.315 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating image(s)#033[00m
Nov 25 04:12:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 948 KiB/s wr, 43 op/s
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.340 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.366 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.392 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.396 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.431 253542 DEBUG nova.policy [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.465 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.466 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.467 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.468 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.493 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.496 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.853 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:28 np0005534516 nova_compute[253538]: 2025-11-25 09:12:28.926 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.048 253542 DEBUG nova.objects.instance [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:12:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:12:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:12:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:12:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.060 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.061 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Ensure instance console log exists: /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.061 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.062 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.062 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:29 np0005534516 nova_compute[253538]: 2025-11-25 09:12:29.529 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Successfully created port: 970a51bb-207b-46ae-bb14-c743ea86eb2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:12:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 186 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 425 KiB/s wr, 27 op/s
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.815 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Successfully updated port: 970a51bb-207b-46ae-bb14-c743ea86eb2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.833 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.833 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.834 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.897 253542 DEBUG nova.compute.manager [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.898 253542 DEBUG nova.compute.manager [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.898 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:12:30 np0005534516 nova_compute[253538]: 2025-11-25 09:12:30.979 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:12:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.024 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.051 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.051 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance network_info: |[{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.052 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.052 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.057 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start _get_guest_xml network_info=[{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.062 253542 WARNING nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.069 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.070 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.073 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.074 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.074 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.077 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.077 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.082 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:12:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872168593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.576 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.601 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.605 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:33 np0005534516 nova_compute[253538]: 2025-11-25 09:12:33.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:12:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257947021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.101 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.103 253542 DEBUG nova.virt.libvirt.vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:12:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.103 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.104 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.105 253542 DEBUG nova.objects.instance [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.126 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <uuid>985307b1-28a6-47cc-8dfc-f18ab08169f7</uuid>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <name>instance-00000093</name>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-639443231</nova:name>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:12:33</nova:creationTime>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <nova:port uuid="970a51bb-207b-46ae-bb14-c743ea86eb2f">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef0:74a0" ipVersion="6"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:74a0" ipVersion="6"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="serial">985307b1-28a6-47cc-8dfc-f18ab08169f7</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="uuid">985307b1-28a6-47cc-8dfc-f18ab08169f7</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/985307b1-28a6-47cc-8dfc-f18ab08169f7_disk">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:f0:74:a0"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <target dev="tap970a51bb-20"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/console.log" append="off"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:12:34 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:12:34 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:12:34 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:12:34 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Preparing to wait for external event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.128 253542 DEBUG nova.virt.libvirt.vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:12:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.128 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.129 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.129 253542 DEBUG os_vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.131 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.133 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970a51bb-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.134 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap970a51bb-20, col_values=(('external_ids', {'iface-id': '970a51bb-207b-46ae-bb14-c743ea86eb2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:74:a0', 'vm-uuid': '985307b1-28a6-47cc-8dfc-f18ab08169f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 NetworkManager[48915]: <info>  [1764061954.1368] manager: (tap970a51bb-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.147 253542 INFO os_vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20')#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:f0:74:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.198 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Using config drive#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.217 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.883 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating config drive at /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config#033[00m
Nov 25 04:12:34 np0005534516 nova_compute[253538]: 2025-11-25 09:12:34.888 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwldmiogl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.012 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.013 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.030 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.044 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwldmiogl" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.066 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.069 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.400 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.401 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deleting local config drive /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config because it was imported into RBD.#033[00m
Nov 25 04:12:35 np0005534516 kernel: tap970a51bb-20: entered promiscuous mode
Nov 25 04:12:35 np0005534516 NetworkManager[48915]: <info>  [1764061955.4724] manager: (tap970a51bb-20): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Nov 25 04:12:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:35Z|01569|binding|INFO|Claiming lport 970a51bb-207b-46ae-bb14-c743ea86eb2f for this chassis.
Nov 25 04:12:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:35Z|01570|binding|INFO|970a51bb-207b-46ae-bb14-c743ea86eb2f: Claiming fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.484 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], port_security=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fef0:74a0/64 2001:db8::f816:3eff:fef0:74a0/64', 'neutron:device_id': '985307b1-28a6-47cc-8dfc-f18ab08169f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=970a51bb-207b-46ae-bb14-c743ea86eb2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.485 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 970a51bb-207b-46ae-bb14-c743ea86eb2f in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 bound to our chassis#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.486 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.490 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:35Z|01571|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f ovn-installed in OVS
Nov 25 04:12:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:35Z|01572|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f up in Southbound
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45341451-801b-47d4-b97f-eccd233c686c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 systemd-machined[215790]: New machine qemu-177-instance-00000093.
Nov 25 04:12:35 np0005534516 systemd[1]: Started Virtual Machine qemu-177-instance-00000093.
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.546 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ef4180-987a-4393-8b13-dd0f94486f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c62d3c5b-00e6-4e1b-bc0b-9285043dd70d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 systemd-udevd[411119]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:12:35 np0005534516 NetworkManager[48915]: <info>  [1764061955.5616] device (tap970a51bb-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:12:35 np0005534516 NetworkManager[48915]: <info>  [1764061955.5626] device (tap970a51bb-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b73079a6-7c21-4423-815d-7dea3f48487f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5422ab-87f9-4612-9558-5444df57ffa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 6, 'rx_bytes': 1930, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 6, 'rx_bytes': 1930, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411129, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.605 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7939b83-f134-4e2a-895f-1f891b8e6ba5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727540, 'tstamp': 727540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411130, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727542, 'tstamp': 727542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411130, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.607 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.609 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.609 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.610 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:12:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.610 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.923 253542 DEBUG nova.compute.manager [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.924 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.924 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.925 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:35 np0005534516 nova_compute[253538]: 2025-11-25 09:12:35.925 253542 DEBUG nova.compute.manager [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Processing event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:12:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.577 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.5762353, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Started (Lifecycle Event)#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.582 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.586 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance spawned successfully.#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.591 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.607 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.621 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.628 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.629 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.629 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.630 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.631 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.632 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.643 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.644 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.5777004, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.644 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.730 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.735 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.585542, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.736 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.741 253542 INFO nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 8.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.742 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.752 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.756 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.785 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.818 253542 INFO nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 9.84 seconds to build instance.#033[00m
Nov 25 04:12:36 np0005534516 nova_compute[253538]: 2025-11-25 09:12:36.832 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:37 np0005534516 podman[411175]: 2025-11-25 09:12:37.854047887 +0000 UTC m=+0.091633012 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 04:12:37 np0005534516 podman[411174]: 2025-11-25 09:12:37.875204212 +0000 UTC m=+0.114659918 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.022 253542 DEBUG nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.023 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.024 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.024 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.025 253542 DEBUG nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:12:38 np0005534516 nova_compute[253538]: 2025-11-25 09:12:38.026 253542 WARNING nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received unexpected event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with vm_state active and task_state None.#033[00m
Nov 25 04:12:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 04:12:39 np0005534516 nova_compute[253538]: 2025-11-25 09:12:39.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:39 np0005534516 nova_compute[253538]: 2025-11-25 09:12:39.486 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 04:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.097 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:12:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:12:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Nov 25 04:12:42 np0005534516 nova_compute[253538]: 2025-11-25 09:12:42.548 253542 DEBUG nova.compute.manager [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:12:42 np0005534516 nova_compute[253538]: 2025-11-25 09:12:42.549 253542 DEBUG nova.compute.manager [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:12:42 np0005534516 nova_compute[253538]: 2025-11-25 09:12:42.549 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:12:42 np0005534516 nova_compute[253538]: 2025-11-25 09:12:42.550 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:12:42 np0005534516 nova_compute[253538]: 2025-11-25 09:12:42.550 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:12:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:43 np0005534516 podman[411211]: 2025-11-25 09:12:43.874008704 +0000 UTC m=+0.120030854 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:12:44 np0005534516 nova_compute[253538]: 2025-11-25 09:12:44.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 76 op/s
Nov 25 04:12:44 np0005534516 nova_compute[253538]: 2025-11-25 09:12:44.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:44 np0005534516 nova_compute[253538]: 2025-11-25 09:12:44.922 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:12:44 np0005534516 nova_compute[253538]: 2025-11-25 09:12:44.923 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:12:44 np0005534516 nova_compute[253538]: 2025-11-25 09:12:44.963 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:12:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:12:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:12:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:12:46 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:47 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9feed2c6-9ac8-4657-aa0e-57793f638a6d does not exist
Nov 25 04:12:47 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 46a597e8-d045-4bab-a1e6-a04550781dad does not exist
Nov 25 04:12:47 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9eb9cfc2-eee0-4cf3-b889-d008ef60e18a does not exist
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:12:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:12:47 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 04:12:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.440394717 +0000 UTC m=+0.057880855 container create d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:12:48 np0005534516 systemd[1]: Started libpod-conmon-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope.
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.408906701 +0000 UTC m=+0.026392889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.537297991 +0000 UTC m=+0.154784149 container init d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.548001332 +0000 UTC m=+0.165487470 container start d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.552211007 +0000 UTC m=+0.169697185 container attach d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:12:48 np0005534516 systemd[1]: libpod-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope: Deactivated successfully.
Nov 25 04:12:48 np0005534516 competent_chaum[411645]: 167 167
Nov 25 04:12:48 np0005534516 conmon[411645]: conmon d01ecc76876e37789a26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope/container/memory.events
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.558273241 +0000 UTC m=+0.175759419 container died d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:12:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1a47aa50298d875825775d3fd26afa60997543dae2228fa6bd2f7a5c1bf3b4c4-merged.mount: Deactivated successfully.
Nov 25 04:12:48 np0005534516 podman[411629]: 2025-11-25 09:12:48.633285701 +0000 UTC m=+0.250771849 container remove d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:12:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:12:48 np0005534516 systemd[1]: libpod-conmon-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope: Deactivated successfully.
Nov 25 04:12:48 np0005534516 podman[411668]: 2025-11-25 09:12:48.850057933 +0000 UTC m=+0.081191778 container create 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:12:48 np0005534516 podman[411668]: 2025-11-25 09:12:48.794572505 +0000 UTC m=+0.025706370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:49 np0005534516 systemd[1]: Started libpod-conmon-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope.
Nov 25 04:12:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:49 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:49 np0005534516 nova_compute[253538]: 2025-11-25 09:12:49.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:49 np0005534516 podman[411668]: 2025-11-25 09:12:49.333972188 +0000 UTC m=+0.565106083 container init 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:49 np0005534516 podman[411668]: 2025-11-25 09:12:49.345783609 +0000 UTC m=+0.576917494 container start 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 04:12:49 np0005534516 nova_compute[253538]: 2025-11-25 09:12:49.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:49Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:74:a0 10.100.0.9
Nov 25 04:12:49 np0005534516 ovn_controller[152859]: 2025-11-25T09:12:49Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:74:a0 10.100.0.9
Nov 25 04:12:49 np0005534516 podman[411668]: 2025-11-25 09:12:49.971692313 +0000 UTC m=+1.202826198 container attach 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:50 np0005534516 nova_compute[253538]: 2025-11-25 09:12:50.047 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:50 np0005534516 nova_compute[253538]: 2025-11-25 09:12:50.048 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 04:12:50 np0005534516 happy_dewdney[411685]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:12:50 np0005534516 happy_dewdney[411685]: --> relative data size: 1.0
Nov 25 04:12:50 np0005534516 happy_dewdney[411685]: --> All data devices are unavailable
Nov 25 04:12:50 np0005534516 systemd[1]: libpod-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope: Deactivated successfully.
Nov 25 04:12:50 np0005534516 podman[411668]: 2025-11-25 09:12:50.400602413 +0000 UTC m=+1.631736278 container died 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:12:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d-merged.mount: Deactivated successfully.
Nov 25 04:12:50 np0005534516 podman[411668]: 2025-11-25 09:12:50.569842034 +0000 UTC m=+1.800975879 container remove 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:12:50 np0005534516 systemd[1]: libpod-conmon-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope: Deactivated successfully.
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.273459991 +0000 UTC m=+0.051856500 container create 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:12:51 np0005534516 systemd[1]: Started libpod-conmon-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope.
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.24253865 +0000 UTC m=+0.020935179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.460792293 +0000 UTC m=+0.239188882 container init 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.469403517 +0000 UTC m=+0.247800026 container start 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:51 np0005534516 epic_hamilton[411882]: 167 167
Nov 25 04:12:51 np0005534516 systemd[1]: libpod-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope: Deactivated successfully.
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.478218037 +0000 UTC m=+0.256614666 container attach 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.478976737 +0000 UTC m=+0.257373326 container died 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:12:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0480f3300461ffdedeb9d88735ebcd4aaeda4a9e04548b5b338f64377e6d871b-merged.mount: Deactivated successfully.
Nov 25 04:12:51 np0005534516 podman[411865]: 2025-11-25 09:12:51.56809035 +0000 UTC m=+0.346486859 container remove 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:12:51 np0005534516 systemd[1]: libpod-conmon-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope: Deactivated successfully.
Nov 25 04:12:51 np0005534516 podman[411908]: 2025-11-25 09:12:51.842698405 +0000 UTC m=+0.079521183 container create 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:12:51 np0005534516 podman[411908]: 2025-11-25 09:12:51.807756975 +0000 UTC m=+0.044579833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:51 np0005534516 systemd[1]: Started libpod-conmon-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope.
Nov 25 04:12:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:52 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:52 np0005534516 podman[411908]: 2025-11-25 09:12:52.026395199 +0000 UTC m=+0.263218047 container init 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:12:52 np0005534516 podman[411908]: 2025-11-25 09:12:52.041022596 +0000 UTC m=+0.277845404 container start 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:52 np0005534516 podman[411908]: 2025-11-25 09:12:52.046365181 +0000 UTC m=+0.283188039 container attach 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:12:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]: {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    "0": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "devices": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "/dev/loop3"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            ],
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_name": "ceph_lv0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_size": "21470642176",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "name": "ceph_lv0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "tags": {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_name": "ceph",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.crush_device_class": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.encrypted": "0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_id": "0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.vdo": "0"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            },
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "vg_name": "ceph_vg0"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        }
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    ],
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    "1": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "devices": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "/dev/loop4"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            ],
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_name": "ceph_lv1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_size": "21470642176",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "name": "ceph_lv1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "tags": {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_name": "ceph",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.crush_device_class": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.encrypted": "0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_id": "1",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.vdo": "0"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            },
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "vg_name": "ceph_vg1"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        }
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    ],
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    "2": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "devices": [
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "/dev/loop5"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            ],
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_name": "ceph_lv2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_size": "21470642176",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "name": "ceph_lv2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "tags": {
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.cluster_name": "ceph",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.crush_device_class": "",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.encrypted": "0",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osd_id": "2",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:                "ceph.vdo": "0"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            },
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "type": "block",
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:            "vg_name": "ceph_vg2"
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:        }
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]:    ]
Nov 25 04:12:52 np0005534516 hardcore_hertz[411924]: }
Nov 25 04:12:52 np0005534516 systemd[1]: libpod-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope: Deactivated successfully.
Nov 25 04:12:52 np0005534516 podman[411908]: 2025-11-25 09:12:52.898586288 +0000 UTC m=+1.135409076 container died 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9-merged.mount: Deactivated successfully.
Nov 25 04:12:52 np0005534516 podman[411908]: 2025-11-25 09:12:52.989079438 +0000 UTC m=+1.225902206 container remove 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:12:52 np0005534516 systemd[1]: libpod-conmon-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope: Deactivated successfully.
Nov 25 04:12:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:12:53
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:12:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:12:53 np0005534516 podman[412090]: 2025-11-25 09:12:53.785886239 +0000 UTC m=+0.042204139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:53 np0005534516 podman[412090]: 2025-11-25 09:12:53.915771849 +0000 UTC m=+0.172089779 container create 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:54 np0005534516 systemd[1]: Started libpod-conmon-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope.
Nov 25 04:12:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:12:54 np0005534516 podman[412090]: 2025-11-25 09:12:54.173772973 +0000 UTC m=+0.430090953 container init 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:54 np0005534516 nova_compute[253538]: 2025-11-25 09:12:54.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:54 np0005534516 podman[412090]: 2025-11-25 09:12:54.189784948 +0000 UTC m=+0.446102878 container start 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 04:12:54 np0005534516 sleepy_sinoussi[412107]: 167 167
Nov 25 04:12:54 np0005534516 systemd[1]: libpod-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope: Deactivated successfully.
Nov 25 04:12:54 np0005534516 podman[412090]: 2025-11-25 09:12:54.272021073 +0000 UTC m=+0.528339013 container attach 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:54 np0005534516 podman[412090]: 2025-11-25 09:12:54.272757504 +0000 UTC m=+0.529075434 container died 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:12:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 04:12:54 np0005534516 nova_compute[253538]: 2025-11-25 09:12:54.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:54 np0005534516 nova_compute[253538]: 2025-11-25 09:12:54.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-50366c2d4307fdf72ea739595996f67c3d1eb67f194b72164d92389018808d47-merged.mount: Deactivated successfully.
Nov 25 04:12:54 np0005534516 podman[412090]: 2025-11-25 09:12:54.871409387 +0000 UTC m=+1.127727277 container remove 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:12:54 np0005534516 systemd[1]: libpod-conmon-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope: Deactivated successfully.
Nov 25 04:12:55 np0005534516 podman[412131]: 2025-11-25 09:12:55.059168761 +0000 UTC m=+0.037521550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:12:55 np0005534516 podman[412131]: 2025-11-25 09:12:55.248584331 +0000 UTC m=+0.226937110 container create 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:12:55 np0005534516 systemd[1]: Started libpod-conmon-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope.
Nov 25 04:12:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:12:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:12:55 np0005534516 podman[412131]: 2025-11-25 09:12:55.625181748 +0000 UTC m=+0.603534537 container init 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:12:55 np0005534516 podman[412131]: 2025-11-25 09:12:55.638436068 +0000 UTC m=+0.616788837 container start 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:12:55 np0005534516 podman[412131]: 2025-11-25 09:12:55.724911309 +0000 UTC m=+0.703264098 container attach 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 04:12:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:12:56 np0005534516 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:12:56 np0005534516 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:12:56 np0005534516 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:12:56 np0005534516 magical_buck[412147]: {
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_id": 1,
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "type": "bluestore"
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    },
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_id": 2,
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "type": "bluestore"
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    },
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_id": 0,
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:12:56 np0005534516 magical_buck[412147]:        "type": "bluestore"
Nov 25 04:12:56 np0005534516 magical_buck[412147]:    }
Nov 25 04:12:56 np0005534516 magical_buck[412147]: }
Nov 25 04:12:56 np0005534516 systemd[1]: libpod-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Deactivated successfully.
Nov 25 04:12:56 np0005534516 podman[412131]: 2025-11-25 09:12:56.772376114 +0000 UTC m=+1.750728903 container died 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:12:56 np0005534516 systemd[1]: libpod-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Consumed 1.138s CPU time.
Nov 25 04:12:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99-merged.mount: Deactivated successfully.
Nov 25 04:12:57 np0005534516 podman[412131]: 2025-11-25 09:12:57.244714664 +0000 UTC m=+2.223067443 container remove 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:12:57 np0005534516 systemd[1]: libpod-conmon-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Deactivated successfully.
Nov 25 04:12:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:12:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:57 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:12:57 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6a863280-1c69-4b2e-9897-4a59b058550f does not exist
Nov 25 04:12:57 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2784088b-d8d4-415f-9a16-748063c7bb84 does not exist
Nov 25 04:12:57 np0005534516 nova_compute[253538]: 2025-11-25 09:12:57.763 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:12:57 np0005534516 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:12:57 np0005534516 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:12:57 np0005534516 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:12:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:12:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:12:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:12:59 np0005534516 nova_compute[253538]: 2025-11-25 09:12:59.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:12:59 np0005534516 nova_compute[253538]: 2025-11-25 09:12:59.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 1.1 MiB/s wr, 60 op/s
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.922 253542 DEBUG nova.compute.manager [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG nova.compute.manager [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.924 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.995 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.997 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.998 253542 INFO nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Terminating instance#033[00m
Nov 25 04:13:00 np0005534516 nova_compute[253538]: 2025-11-25 09:13:00.999 253542 DEBUG nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:13:01 np0005534516 kernel: tap970a51bb-20 (unregistering): left promiscuous mode
Nov 25 04:13:01 np0005534516 NetworkManager[48915]: <info>  [1764061981.3950] device (tap970a51bb-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:13:01 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:01Z|01573|binding|INFO|Releasing lport 970a51bb-207b-46ae-bb14-c743ea86eb2f from this chassis (sb_readonly=0)
Nov 25 04:13:01 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:01Z|01574|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f down in Southbound
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:01Z|01575|binding|INFO|Removing iface tap970a51bb-20 ovn-installed in OVS
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.416 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], port_security=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fef0:74a0/64 2001:db8::f816:3eff:fef0:74a0/64', 'neutron:device_id': '985307b1-28a6-47cc-8dfc-f18ab08169f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=970a51bb-207b-46ae-bb14-c743ea86eb2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.417 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 970a51bb-207b-46ae-bb14-c743ea86eb2f in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 unbound from our chassis#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.418 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.436 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45618230-df9a-4f75-a06e-8820acd8910a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 25 04:13:01 np0005534516 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000093.scope: Consumed 14.106s CPU time.
Nov 25 04:13:01 np0005534516 systemd-machined[215790]: Machine qemu-177-instance-00000093 terminated.
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.463 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b0200af2-81df-4b71-9be3-c6be2cf7db7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.467 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[283c4ec5-23ae-43a6-b5c5-aa4323dd68be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.493 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cff87346-1df0-4d5a-948d-082d61d1b873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.508 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55b743b3-d731-48e6-a3fb-b46feb4ab651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 3328, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 3328, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 37897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412257, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18dcacbb-1fdc-4f96-bf9f-079d366867e5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727540, 'tstamp': 727540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412258, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727542, 'tstamp': 727542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412258, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.522 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.528 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.528 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.529 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.529 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.646 253542 INFO nova.virt.libvirt.driver [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance destroyed successfully.#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.646 253542 DEBUG nova.objects.instance [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.662 253542 DEBUG nova.virt.libvirt.vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:12:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:12:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.662 253542 DEBUG nova.network.os_vif_util [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.664 253542 DEBUG nova.network.os_vif_util [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.665 253542 DEBUG os_vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.668 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970a51bb-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.679 253542 INFO os_vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20')#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.798 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.817 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.817 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.818 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.819 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.819 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.997 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.998 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:01 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.999 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:01.999 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.000 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.001 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:13:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 484 KiB/s wr, 32 op/s
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.449 253542 INFO nova.virt.libvirt.driver [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deleting instance files /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7_del#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.450 253542 INFO nova.virt.libvirt.driver [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deletion of /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7_del complete#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.506 253542 INFO nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.507 253542 DEBUG oslo.service.loopingcall [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.507 253542 DEBUG nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.508 253542 DEBUG nova.network.neutron [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:13:02 np0005534516 nova_compute[253538]: 2025-11-25 09:13:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.005 253542 DEBUG nova.network.neutron [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.020 253542 INFO nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 0.51 seconds to deallocate network for instance.#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.068 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.069 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.096 253542 DEBUG nova.compute.manager [req-efa619af-4cca-49e7-9759-6f96fb7ae67e req-4ae3d06b-490f-4514-bcad-96d293946cae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-deleted-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.165 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.233 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.234 253542 DEBUG nova.compute.provider_tree [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.308 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:13:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.333 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.389 253542 DEBUG oslo_concurrency.processutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.635 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.636 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.766 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:13:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:13:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942042518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.919 253542 DEBUG oslo_concurrency.processutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.926 253542 DEBUG nova.compute.provider_tree [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.938 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:13:03 np0005534516 nova_compute[253538]: 2025-11-25 09:13:03.973 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.006 253542 INFO nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 985307b1-28a6-47cc-8dfc-f18ab08169f7#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.075 253542 DEBUG nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.077 253542 DEBUG nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.077 253542 WARNING nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received unexpected event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.094 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 93 KiB/s wr, 13 op/s
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012365213135703589 of space, bias 1.0, pg target 0.37095639407110764 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:13:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:04 np0005534516 nova_compute[253538]: 2025-11-25 09:13:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.186 253542 DEBUG nova.compute.manager [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG nova.compute.manager [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.213 253542 INFO nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Terminating instance#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.214 253542 DEBUG nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:13:05 np0005534516 kernel: tap637fce28-ce (unregistering): left promiscuous mode
Nov 25 04:13:05 np0005534516 NetworkManager[48915]: <info>  [1764061985.2832] device (tap637fce28-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:13:05 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:05Z|01576|binding|INFO|Releasing lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 from this chassis (sb_readonly=0)
Nov 25 04:13:05 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:05Z|01577|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 down in Southbound
Nov 25 04:13:05 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:05Z|01578|binding|INFO|Removing iface tap637fce28-ce ovn-installed in OVS
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.299 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], port_security=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe59:4e62/64 2001:db8::f816:3eff:fe59:4e62/64', 'neutron:device_id': 'f5964963-11b8-4fd9-ace9-e5ee67571925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=637fce28-ce53-4bd9-95fb-dc0675dd7009) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.301 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 637fce28-ce53-4bd9-95fb-dc0675dd7009 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 unbound from our chassis#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.303 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee7742f-dda6-4f97-a6a9-4148b4cb6334]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.306 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 namespace which is not needed anymore#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.313 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 25 04:13:05 np0005534516 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Consumed 16.725s CPU time.
Nov 25 04:13:05 np0005534516 systemd-machined[215790]: Machine qemu-176-instance-00000092 terminated.
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.496 253542 INFO nova.virt.libvirt.driver [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance destroyed successfully.#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.497 253542 DEBUG nova.objects.instance [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : haproxy version is 2.8.14-c23fe91
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : path to executable is /usr/sbin/haproxy
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : Exiting Master process...
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : Exiting Master process...
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [ALERT]    (410669) : Current worker (410671) exited with code 143 (Terminated)
Nov 25 04:13:05 np0005534516 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : All workers exited. Exiting... (0)
Nov 25 04:13:05 np0005534516 systemd[1]: libpod-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope: Deactivated successfully.
Nov 25 04:13:05 np0005534516 podman[412337]: 2025-11-25 09:13:05.513868322 +0000 UTC m=+0.068440381 container died c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.516 253542 DEBUG nova.virt.libvirt.vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:12:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:12:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.518 253542 DEBUG nova.network.os_vif_util [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.520 253542 DEBUG nova.network.os_vif_util [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.521 253542 DEBUG os_vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.523 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap637fce28-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.526 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.529 253542 INFO os_vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce')#033[00m
Nov 25 04:13:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d-userdata-shm.mount: Deactivated successfully.
Nov 25 04:13:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b8dc5cf81cc7a79b68a934106958d2a3e5f0faa44968a236fc77d5815309e349-merged.mount: Deactivated successfully.
Nov 25 04:13:05 np0005534516 podman[412337]: 2025-11-25 09:13:05.586733653 +0000 UTC m=+0.141305702 container cleanup c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:13:05 np0005534516 systemd[1]: libpod-conmon-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope: Deactivated successfully.
Nov 25 04:13:05 np0005534516 podman[412394]: 2025-11-25 09:13:05.677597874 +0000 UTC m=+0.070549729 container remove c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87f9b7ac-f4e8-41e6-8597-a102af40da92]: (4, ('Tue Nov 25 09:13:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 (c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d)\nc02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d\nTue Nov 25 09:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 (c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d)\nc02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.690 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d53fc46a-b7c1-4b32-a9f3-33b7ceaf38c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.691 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:05 np0005534516 kernel: tap6c73317d-f0: left promiscuous mode
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 nova_compute[253538]: 2025-11-25 09:13:05.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.725 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f213d44f-cf3d-4e81-a232-bd23c28aad45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.744 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[611173dc-7966-4be6-aebb-387360d86862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.745 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[889a86bb-da78-4504-9266-a7eb10bafe3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b75279d-8008-4907-b452-e0acb90e2671]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727522, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412408, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.774 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.774 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e9dcc30e-4396-4c11-8545-9f1e0f06d4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:05 np0005534516 systemd[1]: run-netns-ovnmeta\x2d6c73317d\x2df647\x2d4813\x2d8469\x2d7d8f6ba2c0c7.mount: Deactivated successfully.
Nov 25 04:13:05 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.777 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.099 253542 INFO nova.virt.libvirt.driver [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deleting instance files /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925_del#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.099 253542 INFO nova.virt.libvirt.driver [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deletion of /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925_del complete#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.151 253542 INFO nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG oslo.service.loopingcall [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG nova.network.neutron [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.233 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.233 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:13:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 50 KiB/s wr, 22 op/s
Nov 25 04:13:06 np0005534516 nova_compute[253538]: 2025-11-25 09:13:06.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.217 253542 DEBUG nova.network.neutron [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.242 253542 INFO nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.284 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.285 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.292 253542 DEBUG nova.compute.manager [req-be9705fd-7464-4422-a391-ee554b622e9e req-40a362ce-4149-4023-961a-505f4809e6ca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-deleted-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.338 253542 DEBUG oslo_concurrency.processutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:13:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816843362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.852 253542 DEBUG oslo_concurrency.processutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.862 253542 DEBUG nova.compute.provider_tree [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.880 253542 DEBUG nova.scheduler.client.report [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.902 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.932 253542 INFO nova.scheduler.client.report [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance f5964963-11b8-4fd9-ace9-e5ee67571925#033[00m
Nov 25 04:13:07 np0005534516 nova_compute[253538]: 2025-11-25 09:13:07.985 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.170 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.171 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.189 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:13:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.330 253542 DEBUG nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.331 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.331 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.332 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.332 253542 DEBUG nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.333 253542 WARNING nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received unexpected event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:13:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 47 op/s
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:13:08 np0005534516 nova_compute[253538]: 2025-11-25 09:13:08.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:08 np0005534516 podman[412455]: 2025-11-25 09:13:08.826013011 +0000 UTC m=+0.066365026 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:13:08 np0005534516 podman[412454]: 2025-11-25 09:13:08.837379769 +0000 UTC m=+0.078081944 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:13:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:13:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890086821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.008 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.191 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3597MB free_disk=59.970909118652344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.267 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.267 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.290 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:13:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282700708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.762 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.767 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.808 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.835 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:13:09 np0005534516 nova_compute[253538]: 2025-11-25 09:13:09.836 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.5 KiB/s wr, 57 op/s
Nov 25 04:13:10 np0005534516 nova_compute[253538]: 2025-11-25 09:13:10.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:10.779 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.2 KiB/s wr, 56 op/s
Nov 25 04:13:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.1 KiB/s wr, 55 op/s
Nov 25 04:13:14 np0005534516 nova_compute[253538]: 2025-11-25 09:13:14.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:14 np0005534516 nova_compute[253538]: 2025-11-25 09:13:14.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:14 np0005534516 nova_compute[253538]: 2025-11-25 09:13:14.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:14 np0005534516 podman[412519]: 2025-11-25 09:13:14.903175052 +0000 UTC m=+0.119321175 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 04:13:15 np0005534516 nova_compute[253538]: 2025-11-25 09:13:15.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 25 04:13:16 np0005534516 nova_compute[253538]: 2025-11-25 09:13:16.644 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061981.6433387, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:13:16 np0005534516 nova_compute[253538]: 2025-11-25 09:13:16.645 253542 INFO nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:13:16 np0005534516 nova_compute[253538]: 2025-11-25 09:13:16.728 253542 DEBUG nova.compute.manager [None req-69d292d2-0d52-4b1c-a844-3678ae8d7323 - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.347633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998347665, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1317, "num_deletes": 250, "total_data_size": 2002081, "memory_usage": 2030040, "flush_reason": "Manual Compaction"}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 25 04:13:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998362290, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1185099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57045, "largest_seqno": 58361, "table_properties": {"data_size": 1180416, "index_size": 2078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12407, "raw_average_key_size": 20, "raw_value_size": 1170176, "raw_average_value_size": 1950, "num_data_blocks": 95, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061869, "oldest_key_time": 1764061869, "file_creation_time": 1764061998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 14750 microseconds, and 5110 cpu microseconds.
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.362374) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1185099 bytes OK
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.362400) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365245) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365296) EVENT_LOG_v1 {"time_micros": 1764061998365285, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1996188, prev total WAL file size 1996188, number of live WAL files 2.
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.366673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323535' seq:72057594037927935, type:22 .. '6D6772737461740032353036' seq:0, type:0; will stop at (end)
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1157KB)], [134(10MB)]
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998366750, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11726519, "oldest_snapshot_seqno": -1}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7768 keys, 9199115 bytes, temperature: kUnknown
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998452159, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9199115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9150641, "index_size": 27945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 203243, "raw_average_key_size": 26, "raw_value_size": 9015372, "raw_average_value_size": 1160, "num_data_blocks": 1090, "num_entries": 7768, "num_filter_entries": 7768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.452552) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9199115 bytes
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.456672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 107.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(17.7) write-amplify(7.8) OK, records in: 8221, records dropped: 453 output_compression: NoCompression
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.456702) EVENT_LOG_v1 {"time_micros": 1764061998456689, "job": 82, "event": "compaction_finished", "compaction_time_micros": 85574, "compaction_time_cpu_micros": 22588, "output_level": 6, "num_output_files": 1, "total_output_size": 9199115, "num_input_records": 8221, "num_output_records": 7768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998457678, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998461673, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.366518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:18 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:13:19 np0005534516 nova_compute[253538]: 2025-11-25 09:13:19.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 04:13:20 np0005534516 nova_compute[253538]: 2025-11-25 09:13:20.494 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061985.4911578, f5964963-11b8-4fd9-ace9-e5ee67571925 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:13:20 np0005534516 nova_compute[253538]: 2025-11-25 09:13:20.495 253542 INFO nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:13:20 np0005534516 nova_compute[253538]: 2025-11-25 09:13:20.516 253542 DEBUG nova.compute.manager [None req-5549c912-1677-41b9-8c42-835b85cc2250 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:20 np0005534516 nova_compute[253538]: 2025-11-25 09:13:20.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:24 np0005534516 nova_compute[253538]: 2025-11-25 09:13:24.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:25 np0005534516 nova_compute[253538]: 2025-11-25 09:13:25.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:13:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:13:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:13:29 np0005534516 nova_compute[253538]: 2025-11-25 09:13:29.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:30 np0005534516 nova_compute[253538]: 2025-11-25 09:13:30.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8::f816:3eff:feae:940e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0596e673-151c-4eed-ad1e-d612e39d6f14) old=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.982 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0596e673-151c-4eed-ad1e-d612e39d6f14 in datapath a0d85633-9402-4022-8c0a-b00348775e93 updated#033[00m
Nov 25 04:13:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.983 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:13:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6446cfcf-419e-46ff-9151-10feee93b3b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:34 np0005534516 nova_compute[253538]: 2025-11-25 09:13:34.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:35 np0005534516 nova_compute[253538]: 2025-11-25 09:13:35.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:39 np0005534516 nova_compute[253538]: 2025-11-25 09:13:39.516 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:39 np0005534516 podman[412549]: 2025-11-25 09:13:39.860852564 +0000 UTC m=+0.092726682 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:13:39 np0005534516 podman[412548]: 2025-11-25 09:13:39.882219435 +0000 UTC m=+0.122257045 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:13:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.930 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8:0:1:f816:3eff:feae:940e 2001:db8::f816:3eff:feae:940e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:feae:940e/64 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0596e673-151c-4eed-ad1e-d612e39d6f14) old=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8::f816:3eff:feae:940e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0596e673-151c-4eed-ad1e-d612e39d6f14 in datapath a0d85633-9402-4022-8c0a-b00348775e93 updated#033[00m
Nov 25 04:13:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.932 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:13:39 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e6b951-6f36-4ed7-b950-a31606b50059]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:40 np0005534516 nova_compute[253538]: 2025-11-25 09:13:40.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.097 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:44 np0005534516 nova_compute[253538]: 2025-11-25 09:13:44.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.014 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.015 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.029 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.111 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.111 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.119 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.119 253542 INFO nova.compute.claims [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.227 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:13:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659642029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.654 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.661 253542 DEBUG nova.compute.provider_tree [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.674 253542 DEBUG nova.scheduler.client.report [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.703 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.704 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.749 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.749 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.797 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.819 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:13:45 np0005534516 podman[412608]: 2025-11-25 09:13:45.866560373 +0000 UTC m=+0.110898446 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.927 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.929 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.930 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating image(s)#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.957 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:45 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.978 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:45.999 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.003 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.077 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.079 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.080 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.112 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.117 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.506 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.577 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.690 253542 DEBUG nova.objects.instance [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.707 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.708 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Ensure instance console log exists: /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:46 np0005534516 nova_compute[253538]: 2025-11-25 09:13:46.804 253542 DEBUG nova.policy [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:13:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 846 KiB/s wr, 3 op/s
Nov 25 04:13:48 np0005534516 nova_compute[253538]: 2025-11-25 09:13:48.529 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Successfully created port: bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.813 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Successfully updated port: bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.836 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.837 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.843 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.843 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.844 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.973 253542 DEBUG nova.compute.manager [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.973 253542 DEBUG nova.compute.manager [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:13:49 np0005534516 nova_compute[253538]: 2025-11-25 09:13:49.974 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:13:50 np0005534516 nova_compute[253538]: 2025-11-25 09:13:50.015 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:13:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 126 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 04:13:50 np0005534516 nova_compute[253538]: 2025-11-25 09:13:50.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.915 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.963 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.963 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance network_info: |[{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.964 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.965 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.970 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start _get_guest_xml network_info=[{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.976 253542 WARNING nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.982 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.983 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.991 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.992 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.992 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.996 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:13:51 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.996 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:51.999 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:13:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:13:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252352678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.433 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.460 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.465 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:13:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2957749098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.972 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.973 253542 DEBUG nova.virt.libvirt.vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:13:45Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.974 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.975 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:13:52 np0005534516 nova_compute[253538]: 2025-11-25 09:13:52.976 253542 DEBUG nova.objects.instance [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.000 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <uuid>3525156a-e9c9-40b7-88f6-db0de5eb3cd1</uuid>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <name>instance-00000094</name>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-846287804</nova:name>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:13:51</nova:creationTime>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <nova:port uuid="bc0d7fbf-1c1d-43bc-884b-d89f14be4712">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe57:11da" ipVersion="6"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe57:11da" ipVersion="6"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="serial">3525156a-e9c9-40b7-88f6-db0de5eb3cd1</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="uuid">3525156a-e9c9-40b7-88f6-db0de5eb3cd1</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:57:11:da"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <target dev="tapbc0d7fbf-1c"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/console.log" append="off"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:13:53 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:13:53 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:13:53 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:13:53 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.002 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Preparing to wait for external event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.002 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.003 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.003 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.004 253542 DEBUG nova.virt.libvirt.vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:13:45Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.004 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.005 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.005 253542 DEBUG os_vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.006 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc0d7fbf-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc0d7fbf-1c, col_values=(('external_ids', {'iface-id': 'bc0d7fbf-1c1d-43bc-884b-d89f14be4712', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:11:da', 'vm-uuid': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.0119] manager: (tapbc0d7fbf-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.021 253542 INFO os_vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c')#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.058 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.059 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.060 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:57:11:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.060 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Using config drive#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.082 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.347 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating config drive at /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.358 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpun_364r7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:13:53
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'images']
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:13:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.513 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpun_364r7" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.535 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.539 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.688 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.690 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deleting local config drive /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config because it was imported into RBD.#033[00m
Nov 25 04:13:53 np0005534516 kernel: tapbc0d7fbf-1c: entered promiscuous mode
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.7711] manager: (tapbc0d7fbf-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:53Z|01579|binding|INFO|Claiming lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for this chassis.
Nov 25 04:13:53 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:53Z|01580|binding|INFO|bc0d7fbf-1c1d-43bc-884b-d89f14be4712: Claiming fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.794 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], port_security=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe57:11da/64 2001:db8::f816:3eff:fe57:11da/64', 'neutron:device_id': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc0d7fbf-1c1d-43bc-884b-d89f14be4712) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.795 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 in datapath a0d85633-9402-4022-8c0a-b00348775e93 bound to our chassis#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.796 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93#033[00m
Nov 25 04:13:53 np0005534516 systemd-udevd[412933]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.799 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.800 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.8095] device (tapbc0d7fbf-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.8119] device (tapbc0d7fbf-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.809 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d802538-a2d4-4aa4-840d-3e77be0e9034]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.810 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d85633-91 in ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.812 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d85633-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71dc1414-4cd6-442f-9aa6-b0e5b68e8646]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23f41ec7-1971-4fef-a5da-7d61c523867b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 systemd-machined[215790]: New machine qemu-178-instance-00000094.
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.816 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.826 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4b293432-ff8f-4926-ac4d-74e4a5b1b970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 systemd[1]: Started Virtual Machine qemu-178-instance-00000094.
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[942a472f-b178-4c00-ac66-1389f17e291e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:53Z|01581|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 ovn-installed in OVS
Nov 25 04:13:53 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:53Z|01582|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 up in Southbound
Nov 25 04:13:53 np0005534516 nova_compute[253538]: 2025-11-25 09:13:53.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.883 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36822361-48c5-4b1b-96f8-50a43991eb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.8892] manager: (tapa0d85633-90): new Veth device (/org/freedesktop/NetworkManager/Devices/650)
Nov 25 04:13:53 np0005534516 systemd-udevd[412937]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68afee91-7aa8-4964-8e36-e15320448cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.919 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f11face7-61b0-4a79-9087-f93c628cf312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.921 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c857d6-59e5-4ca1-ba5d-e4edc324f8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 NetworkManager[48915]: <info>  [1764062033.9469] device (tapa0d85633-90): carrier: link connected
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.954 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4129cdca-2311-4c7b-a53f-9da83bd0cb71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.973 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4cf901-d5c9-4103-8fb0-e56e3a73449e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412967, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:53 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d30da5-735c-42fa-9009-f8bac4a259ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:940e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738854, 'tstamp': 738854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412968, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87302f19-4df3-4f3b-9937-b387ebe295d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412969, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60ab94d4-6a2a-44e9-96a6-466ce1c2ce8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.061 253542 DEBUG nova.compute.manager [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.063 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.063 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.064 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.064 253542 DEBUG nova.compute.manager [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Processing event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8425dcb5-b697-4298-bc3b-f5ecb8bb365b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.104 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:54 np0005534516 kernel: tapa0d85633-90: entered promiscuous mode
Nov 25 04:13:54 np0005534516 NetworkManager[48915]: <info>  [1764062034.1069] manager: (tapa0d85633-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.111 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:54 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:54Z|01583|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.116 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.117 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5fd35a-1f46-4d8b-80e2-4d1aa2472f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.119 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:13:54 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.119 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'env', 'PROCESS_TAG=haproxy-a0d85633-9402-4022-8c0a-b00348775e93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d85633-9402-4022-8c0a-b00348775e93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.226 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.229 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2285385, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.229 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Started (Lifecycle Event)#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.231 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.234 253542 INFO nova.virt.libvirt.driver [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance spawned successfully.#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.234 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.245 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.250 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.255 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.255 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.256 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.256 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.257 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.257 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.276 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.277 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2286754, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.301 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.304 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2297797, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.305 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.313 253542 INFO nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 8.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.313 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.322 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.342 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.370 253542 INFO nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 9.29 seconds to build instance.#033[00m
Nov 25 04:13:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.385 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:54 np0005534516 podman[413043]: 2025-11-25 09:13:54.514103718 +0000 UTC m=+0.047120392 container create 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:13:54 np0005534516 nova_compute[253538]: 2025-11-25 09:13:54.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:54 np0005534516 systemd[1]: Started libpod-conmon-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope.
Nov 25 04:13:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:13:54 np0005534516 podman[413043]: 2025-11-25 09:13:54.490694901 +0000 UTC m=+0.023711605 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:13:54 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92dfa951f831771d47b5f3c19baa67a33147d7a8d9cd91e8a74b897c79b9363c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:54 np0005534516 podman[413043]: 2025-11-25 09:13:54.599590812 +0000 UTC m=+0.132607586 container init 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 04:13:54 np0005534516 podman[413043]: 2025-11-25 09:13:54.605073701 +0000 UTC m=+0.138090385 container start 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:13:54 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : New worker (413064) forked
Nov 25 04:13:54 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : Loading success.
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.129 253542 DEBUG nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.129 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.130 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.130 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.131 253542 DEBUG nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.131 253542 WARNING nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received unexpected event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:13:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 04:13:56 np0005534516 nova_compute[253538]: 2025-11-25 09:13:56.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:57 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:57Z|01584|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 04:13:57 np0005534516 nova_compute[253538]: 2025-11-25 09:13:57.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:57 np0005534516 NetworkManager[48915]: <info>  [1764062037.8918] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Nov 25 04:13:57 np0005534516 NetworkManager[48915]: <info>  [1764062037.8932] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Nov 25 04:13:57 np0005534516 ovn_controller[152859]: 2025-11-25T09:13:57Z|01585|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 04:13:57 np0005534516 nova_compute[253538]: 2025-11-25 09:13:57.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:57 np0005534516 nova_compute[253538]: 2025-11-25 09:13:57.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:13:58 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b91b1ca9-cab7-4043-96b0-af1f9f7e327f does not exist
Nov 25 04:13:58 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f10292e2-2cfb-4ea8-9911-e5ae8a4767ad does not exist
Nov 25 04:13:58 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c9f0f336-e2a1-42bb-8bda-c58de83795a7 does not exist
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.305 253542 DEBUG nova.compute.manager [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.305 253542 DEBUG nova.compute.manager [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.306 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.306 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.307 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:13:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:13:58 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:13:58 np0005534516 nova_compute[253538]: 2025-11-25 09:13:58.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.77845888 +0000 UTC m=+0.039836943 container create bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:13:58 np0005534516 systemd[1]: Started libpod-conmon-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope.
Nov 25 04:13:58 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.762439935 +0000 UTC m=+0.023818028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.86784191 +0000 UTC m=+0.129220003 container init bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.877839002 +0000 UTC m=+0.139217075 container start bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.881626275 +0000 UTC m=+0.143004338 container attach bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:13:58 np0005534516 funny_cray[413361]: 167 167
Nov 25 04:13:58 np0005534516 systemd[1]: libpod-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope: Deactivated successfully.
Nov 25 04:13:58 np0005534516 conmon[413361]: conmon bf3dd2a5ecdd0a960df0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope/container/memory.events
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.884818452 +0000 UTC m=+0.146196545 container died bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:13:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-57f8277c8d2e55bea2df96da31bd48e1b1bbbaa3f8a2699c52030ab04ccf3fbf-merged.mount: Deactivated successfully.
Nov 25 04:13:58 np0005534516 podman[413345]: 2025-11-25 09:13:58.92742309 +0000 UTC m=+0.188801153 container remove bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:13:58 np0005534516 systemd[1]: libpod-conmon-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope: Deactivated successfully.
Nov 25 04:13:59 np0005534516 podman[413385]: 2025-11-25 09:13:59.125184866 +0000 UTC m=+0.068786101 container create 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 04:13:59 np0005534516 systemd[1]: Started libpod-conmon-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope.
Nov 25 04:13:59 np0005534516 podman[413385]: 2025-11-25 09:13:59.102009656 +0000 UTC m=+0.045610931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:13:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:13:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:13:59 np0005534516 podman[413385]: 2025-11-25 09:13:59.237131639 +0000 UTC m=+0.180732924 container init 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:13:59 np0005534516 podman[413385]: 2025-11-25 09:13:59.245015084 +0000 UTC m=+0.188616319 container start 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:13:59 np0005534516 podman[413385]: 2025-11-25 09:13:59.248593081 +0000 UTC m=+0.192194336 container attach 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 04:13:59 np0005534516 nova_compute[253538]: 2025-11-25 09:13:59.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:00 np0005534516 nova_compute[253538]: 2025-11-25 09:14:00.304 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:14:00 np0005534516 nova_compute[253538]: 2025-11-25 09:14:00.306 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:00 np0005534516 flamboyant_fermat[413402]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:14:00 np0005534516 flamboyant_fermat[413402]: --> relative data size: 1.0
Nov 25 04:14:00 np0005534516 flamboyant_fermat[413402]: --> All data devices are unavailable
Nov 25 04:14:00 np0005534516 nova_compute[253538]: 2025-11-25 09:14:00.325 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:00 np0005534516 systemd[1]: libpod-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Deactivated successfully.
Nov 25 04:14:00 np0005534516 systemd[1]: libpod-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Consumed 1.022s CPU time.
Nov 25 04:14:00 np0005534516 podman[413385]: 2025-11-25 09:14:00.353757893 +0000 UTC m=+1.297359128 container died 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 04:14:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 96 op/s
Nov 25 04:14:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2-merged.mount: Deactivated successfully.
Nov 25 04:14:00 np0005534516 podman[413385]: 2025-11-25 09:14:00.421586508 +0000 UTC m=+1.365187733 container remove 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:14:00 np0005534516 systemd[1]: libpod-conmon-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Deactivated successfully.
Nov 25 04:14:00 np0005534516 nova_compute[253538]: 2025-11-25 09:14:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:00 np0005534516 nova_compute[253538]: 2025-11-25 09:14:00.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.06654836 +0000 UTC m=+0.086926514 container create d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.003447794 +0000 UTC m=+0.023825988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:14:01 np0005534516 systemd[1]: Started libpod-conmon-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope.
Nov 25 04:14:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.153019581 +0000 UTC m=+0.173397745 container init d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.163864976 +0000 UTC m=+0.184243140 container start d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:14:01 np0005534516 confident_hopper[413598]: 167 167
Nov 25 04:14:01 np0005534516 systemd[1]: libpod-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope: Deactivated successfully.
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.18208338 +0000 UTC m=+0.202461574 container attach d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.183599842 +0000 UTC m=+0.203978006 container died d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 04:14:01 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4e7eda25c3b017e90ce1aaf4aab8573785d50ab1f91e39c607f309096d328886-merged.mount: Deactivated successfully.
Nov 25 04:14:01 np0005534516 podman[413584]: 2025-11-25 09:14:01.35568667 +0000 UTC m=+0.376064824 container remove d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:14:01 np0005534516 systemd[1]: libpod-conmon-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope: Deactivated successfully.
Nov 25 04:14:01 np0005534516 podman[413625]: 2025-11-25 09:14:01.537109092 +0000 UTC m=+0.029226996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:14:01 np0005534516 podman[413625]: 2025-11-25 09:14:01.628447724 +0000 UTC m=+0.120565568 container create e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:14:01 np0005534516 systemd[1]: Started libpod-conmon-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope.
Nov 25 04:14:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:14:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:01 np0005534516 podman[413625]: 2025-11-25 09:14:01.766269061 +0000 UTC m=+0.258386935 container init e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:14:01 np0005534516 podman[413625]: 2025-11-25 09:14:01.779213343 +0000 UTC m=+0.271331167 container start e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:14:01 np0005534516 podman[413625]: 2025-11-25 09:14:01.788615739 +0000 UTC m=+0.280733633 container attach e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 04:14:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 74 op/s
Nov 25 04:14:02 np0005534516 nova_compute[253538]: 2025-11-25 09:14:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]: {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    "0": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "devices": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "/dev/loop3"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            ],
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_name": "ceph_lv0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_size": "21470642176",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "name": "ceph_lv0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "tags": {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_name": "ceph",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.crush_device_class": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.encrypted": "0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_id": "0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.vdo": "0"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            },
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "vg_name": "ceph_vg0"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        }
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    ],
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    "1": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "devices": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "/dev/loop4"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            ],
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_name": "ceph_lv1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_size": "21470642176",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "name": "ceph_lv1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "tags": {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_name": "ceph",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.crush_device_class": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.encrypted": "0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_id": "1",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.vdo": "0"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            },
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "vg_name": "ceph_vg1"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        }
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    ],
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    "2": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "devices": [
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "/dev/loop5"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            ],
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_name": "ceph_lv2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_size": "21470642176",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:14:02 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "name": "ceph_lv2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "tags": {
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.cluster_name": "ceph",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.crush_device_class": "",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.encrypted": "0",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osd_id": "2",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:                "ceph.vdo": "0"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            },
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "type": "block",
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:            "vg_name": "ceph_vg2"
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:        }
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]:    ]
Nov 25 04:14:02 np0005534516 affectionate_lumiere[413642]: }
Nov 25 04:14:02 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:14:02 np0005534516 systemd[1]: libpod-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope: Deactivated successfully.
Nov 25 04:14:02 np0005534516 podman[413625]: 2025-11-25 09:14:02.606715988 +0000 UTC m=+1.098833792 container died e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:14:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da-merged.mount: Deactivated successfully.
Nov 25 04:14:02 np0005534516 podman[413625]: 2025-11-25 09:14:02.681064549 +0000 UTC m=+1.173182353 container remove e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:14:02 np0005534516 systemd[1]: libpod-conmon-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope: Deactivated successfully.
Nov 25 04:14:03 np0005534516 nova_compute[253538]: 2025-11-25 09:14:03.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.282869588 +0000 UTC m=+0.054567164 container create adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:14:03 np0005534516 systemd[1]: Started libpod-conmon-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope.
Nov 25 04:14:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.253664935 +0000 UTC m=+0.025362521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:14:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.418750142 +0000 UTC m=+0.190447818 container init adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.43078705 +0000 UTC m=+0.202484636 container start adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:14:03 np0005534516 blissful_lehmann[413820]: 167 167
Nov 25 04:14:03 np0005534516 systemd[1]: libpod-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope: Deactivated successfully.
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.445609052 +0000 UTC m=+0.217306668 container attach adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.446085315 +0000 UTC m=+0.217782911 container died adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:14:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a6813109c5a7abe3e95fec5cfc1042fd70bb7a62d752e9ed5b0b49ae3707f036-merged.mount: Deactivated successfully.
Nov 25 04:14:03 np0005534516 podman[413804]: 2025-11-25 09:14:03.511077211 +0000 UTC m=+0.282774787 container remove adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:14:03 np0005534516 systemd[1]: libpod-conmon-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope: Deactivated successfully.
Nov 25 04:14:03 np0005534516 nova_compute[253538]: 2025-11-25 09:14:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:03 np0005534516 podman[413842]: 2025-11-25 09:14:03.75342115 +0000 UTC m=+0.084824297 container create b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:14:03 np0005534516 podman[413842]: 2025-11-25 09:14:03.699177486 +0000 UTC m=+0.030580683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:14:03 np0005534516 systemd[1]: Started libpod-conmon-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope.
Nov 25 04:14:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:14:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:14:03 np0005534516 podman[413842]: 2025-11-25 09:14:03.890124216 +0000 UTC m=+0.221527403 container init b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:14:03 np0005534516 podman[413842]: 2025-11-25 09:14:03.897934649 +0000 UTC m=+0.229337796 container start b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:14:03 np0005534516 podman[413842]: 2025-11-25 09:14:03.946997742 +0000 UTC m=+0.278400889 container attach b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:14:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:14:04 np0005534516 nova_compute[253538]: 2025-11-25 09:14:04.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:04 np0005534516 confident_banach[413858]: {
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_id": 1,
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "type": "bluestore"
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    },
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_id": 2,
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "type": "bluestore"
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    },
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_id": 0,
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:14:04 np0005534516 confident_banach[413858]:        "type": "bluestore"
Nov 25 04:14:04 np0005534516 confident_banach[413858]:    }
Nov 25 04:14:04 np0005534516 confident_banach[413858]: }
Nov 25 04:14:04 np0005534516 systemd[1]: libpod-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope: Deactivated successfully.
Nov 25 04:14:04 np0005534516 podman[413842]: 2025-11-25 09:14:04.857967946 +0000 UTC m=+1.189371053 container died b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:14:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915-merged.mount: Deactivated successfully.
Nov 25 04:14:05 np0005534516 podman[413842]: 2025-11-25 09:14:05.013873994 +0000 UTC m=+1.345277111 container remove b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:14:05 np0005534516 systemd[1]: libpod-conmon-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope: Deactivated successfully.
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:14:05 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 77ad1e66-7fc0-4824-abe8-27f6d78b5ee4 does not exist
Nov 25 04:14:05 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 77576eab-69ae-45d8-8c77-ffc564017ff8 does not exist
Nov 25 04:14:05 np0005534516 nova_compute[253538]: 2025-11-25 09:14:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:14:05 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:14:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:14:07 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:07Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:11:da 10.100.0.12
Nov 25 04:14:07 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:07Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:11:da 10.100.0.12
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 150 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 102 op/s
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:14:08 np0005534516 nova_compute[253538]: 2025-11-25 09:14:08.591 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:14:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925464592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.014 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.078 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.078 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.277 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3456MB free_disk=59.94898986816406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.420 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.532 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:14:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661340366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.901 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.907 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.925 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.985 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:14:09 np0005534516 nova_compute[253538]: 2025-11-25 09:14:09.985 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 160 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 04:14:10 np0005534516 podman[414002]: 2025-11-25 09:14:10.812065433 +0000 UTC m=+0.062158362 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:14:10 np0005534516 podman[414001]: 2025-11-25 09:14:10.81419585 +0000 UTC m=+0.064607338 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 04:14:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:14:13 np0005534516 nova_compute[253538]: 2025-11-25 09:14:13.020 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:14:14 np0005534516 nova_compute[253538]: 2025-11-25 09:14:14.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.757910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054757954, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 712, "num_deletes": 251, "total_data_size": 884990, "memory_usage": 899176, "flush_reason": "Manual Compaction"}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054767843, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 876885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58362, "largest_seqno": 59073, "table_properties": {"data_size": 873133, "index_size": 1595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8476, "raw_average_key_size": 19, "raw_value_size": 865603, "raw_average_value_size": 1989, "num_data_blocks": 71, "num_entries": 435, "num_filter_entries": 435, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061999, "oldest_key_time": 1764061999, "file_creation_time": 1764062054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 10006 microseconds, and 5437 cpu microseconds.
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.767909) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 876885 bytes OK
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.767936) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770249) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770270) EVENT_LOG_v1 {"time_micros": 1764062054770263, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 881291, prev total WAL file size 881291, number of live WAL files 2.
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.771027) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(856KB)], [137(8983KB)]
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054771075, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 10076000, "oldest_snapshot_seqno": -1}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7689 keys, 8350119 bytes, temperature: kUnknown
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054829783, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8350119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8302978, "index_size": 26804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 202292, "raw_average_key_size": 26, "raw_value_size": 8169847, "raw_average_value_size": 1062, "num_data_blocks": 1035, "num_entries": 7689, "num_filter_entries": 7689, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.830057) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8350119 bytes
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.831666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 142.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.8 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(21.0) write-amplify(9.5) OK, records in: 8203, records dropped: 514 output_compression: NoCompression
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.831686) EVENT_LOG_v1 {"time_micros": 1764062054831677, "job": 84, "event": "compaction_finished", "compaction_time_micros": 58809, "compaction_time_cpu_micros": 23845, "output_level": 6, "num_output_files": 1, "total_output_size": 8350119, "num_input_records": 8203, "num_output_records": 7689, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054832140, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054834290, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:14:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.491 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.492 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.520 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.731 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.732 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.740 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.740 253542 INFO nova.compute.claims [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:14:16 np0005534516 podman[414037]: 2025-11-25 09:14:16.912193299 +0000 UTC m=+0.154985794 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:14:16 np0005534516 nova_compute[253538]: 2025-11-25 09:14:16.917 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:14:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522660723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.358 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.365 253542 DEBUG nova.compute.provider_tree [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.387 253542 DEBUG nova.scheduler.client.report [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.413 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.415 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.503 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.504 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.545 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.600 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.754 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.755 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.756 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating image(s)#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.781 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.804 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.825 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.830 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.865 253542 DEBUG nova.policy [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.905 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.906 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.907 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.907 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.928 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:17 np0005534516 nova_compute[253538]: 2025-11-25 09:14:17.932 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49fdf548-77e1-47b2-9118-f42acc3a4052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.022 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.265 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49fdf548-77e1-47b2-9118-f42acc3a4052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.352 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:14:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.484 253542 DEBUG nova.objects.instance [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.630 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.631 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Ensure instance console log exists: /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.631 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.632 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:18 np0005534516 nova_compute[253538]: 2025-11-25 09:14:18.632 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:19 np0005534516 nova_compute[253538]: 2025-11-25 09:14:19.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 203 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Nov 25 04:14:21 np0005534516 nova_compute[253538]: 2025-11-25 09:14:21.780 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Successfully created port: 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:14:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 04:14:22 np0005534516 nova_compute[253538]: 2025-11-25 09:14:22.939 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Successfully updated port: 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.021 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.022 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.022 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.032 253542 DEBUG nova.compute.manager [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.033 253542 DEBUG nova.compute.manager [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.033 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:23 np0005534516 nova_compute[253538]: 2025-11-25 09:14:23.165 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:14:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:14:24 np0005534516 nova_compute[253538]: 2025-11-25 09:14:24.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.799 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.845 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance network_info: |[{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.851 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start _get_guest_xml network_info=[{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.856 253542 WARNING nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.863 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.864 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.868 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.869 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.869 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.873 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.873 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:14:25 np0005534516 nova_compute[253538]: 2025-11-25 09:14:25.877 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:14:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528925284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.369 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.394 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.398 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:26 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:14:26 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454110497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.853 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.856 253542 DEBUG nova.virt.libvirt.vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:14:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.856 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.857 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.858 253542 DEBUG nova.objects.instance [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.871 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <uuid>49fdf548-77e1-47b2-9118-f42acc3a4052</uuid>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <name>instance-00000095</name>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-1132307495</nova:name>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:14:25</nova:creationTime>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <nova:port uuid="8567d2b8-5fd3-45d6-9d10-d88839de3d8a">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feef:e344" ipVersion="6"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feef:e344" ipVersion="6"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="serial">49fdf548-77e1-47b2-9118-f42acc3a4052</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="uuid">49fdf548-77e1-47b2-9118-f42acc3a4052</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/49fdf548-77e1-47b2-9118-f42acc3a4052_disk">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ef:e3:44"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <target dev="tap8567d2b8-5f"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/console.log" append="off"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:14:26 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:14:26 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:14:26 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:14:26 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.872 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Preparing to wait for external event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.873 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.874 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.874 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.876 253542 DEBUG nova.virt.libvirt.vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:14:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.876 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.878 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.879 253542 DEBUG os_vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.881 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.881 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.885 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8567d2b8-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.886 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8567d2b8-5f, col_values=(('external_ids', {'iface-id': '8567d2b8-5fd3-45d6-9d10-d88839de3d8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:e3:44', 'vm-uuid': '49fdf548-77e1-47b2-9118-f42acc3a4052'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:26 np0005534516 NetworkManager[48915]: <info>  [1764062066.8901] manager: (tap8567d2b8-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.897 253542 INFO os_vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f')#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.938 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.938 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.939 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:ef:e3:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.939 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Using config drive#033[00m
Nov 25 04:14:26 np0005534516 nova_compute[253538]: 2025-11-25 09:14:26.960 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.263 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating config drive at /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.272 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t44b9p1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.419 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t44b9p1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.452 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.457 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.642 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.643 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deleting local config drive /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config because it was imported into RBD.#033[00m
Nov 25 04:14:27 np0005534516 kernel: tap8567d2b8-5f: entered promiscuous mode
Nov 25 04:14:27 np0005534516 NetworkManager[48915]: <info>  [1764062067.6896] manager: (tap8567d2b8-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Nov 25 04:14:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:27Z|01586|binding|INFO|Claiming lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a for this chassis.
Nov 25 04:14:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:27Z|01587|binding|INFO|8567d2b8-5fd3-45d6-9d10-d88839de3d8a: Claiming fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:27Z|01588|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a ovn-installed in OVS
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:27 np0005534516 systemd-machined[215790]: New machine qemu-179-instance-00000095.
Nov 25 04:14:27 np0005534516 systemd-udevd[414389]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:14:27 np0005534516 systemd[1]: Started Virtual Machine qemu-179-instance-00000095.
Nov 25 04:14:27 np0005534516 NetworkManager[48915]: <info>  [1764062067.7345] device (tap8567d2b8-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:14:27 np0005534516 NetworkManager[48915]: <info>  [1764062067.7360] device (tap8567d2b8-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:14:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:27Z|01589|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a up in Southbound
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.768 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], port_security=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:feef:e344/64 2001:db8::f816:3eff:feef:e344/64', 'neutron:device_id': '49fdf548-77e1-47b2-9118-f42acc3a4052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8567d2b8-5fd3-45d6-9d10-d88839de3d8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.769 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a in datapath a0d85633-9402-4022-8c0a-b00348775e93 bound to our chassis#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.770 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.786 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6398b08b-5d5c-40ab-bb49-8c328f3c02f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.819 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[822afb82-ae9d-44ff-80ee-64253d60cc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.822 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d45e3f40-b7a4-4c01-92d5-392eccb88384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.867 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf82c06-d4dc-4173-816f-778651a88513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.891 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62f5086c-fb02-4c55-8599-96b9d57affa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414403, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd5a99-2726-4d62-9b65-a6068ab8d4c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738867, 'tstamp': 738867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414404, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738869, 'tstamp': 738869}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414404, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:14:27 np0005534516 nova_compute[253538]: 2025-11-25 09:14:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.914 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.053 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.054 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.069 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.197 253542 DEBUG nova.compute.manager [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.198 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.198 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.199 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.199 253542 DEBUG nova.compute.manager [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Processing event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:14:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.735 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.735344, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.736 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Started (Lifecycle Event)#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.739 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.743 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.747 253542 INFO nova.virt.libvirt.driver [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance spawned successfully.#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.747 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.760 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.766 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.769 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.770 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.770 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.771 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.771 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.772 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.794 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.795 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.7354622, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.795 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.812 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.7427714, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.812 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.832 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.835 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.852 253542 INFO nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 11.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.852 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:14:28 np0005534516 nova_compute[253538]: 2025-11-25 09:14:28.992 253542 INFO nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 12.31 seconds to build instance.#033[00m
Nov 25 04:14:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:14:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:14:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:14:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:14:29 np0005534516 nova_compute[253538]: 2025-11-25 09:14:29.084 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:29 np0005534516 nova_compute[253538]: 2025-11-25 09:14:29.540 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.281 253542 DEBUG nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:14:30 np0005534516 nova_compute[253538]: 2025-11-25 09:14:30.281 253542 WARNING nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received unexpected event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with vm_state active and task_state None.#033[00m
Nov 25 04:14:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Nov 25 04:14:31 np0005534516 nova_compute[253538]: 2025-11-25 09:14:31.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:32 np0005534516 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG nova.compute.manager [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:32 np0005534516 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG nova.compute.manager [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:14:32 np0005534516 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:32 np0005534516 nova_compute[253538]: 2025-11-25 09:14:32.357 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:32 np0005534516 nova_compute[253538]: 2025-11-25 09:14:32.357 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:14:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 472 KiB/s wr, 50 op/s
Nov 25 04:14:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 25 04:14:34 np0005534516 nova_compute[253538]: 2025-11-25 09:14:34.528 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:14:34 np0005534516 nova_compute[253538]: 2025-11-25 09:14:34.529 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:34 np0005534516 nova_compute[253538]: 2025-11-25 09:14:34.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:34 np0005534516 nova_compute[253538]: 2025-11-25 09:14:34.547 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:14:36 np0005534516 nova_compute[253538]: 2025-11-25 09:14:36.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:14:39 np0005534516 nova_compute[253538]: 2025-11-25 09:14:39.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 73 op/s
Nov 25 04:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:41 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:41Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:e3:44 10.100.0.13
Nov 25 04:14:41 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:41Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:e3:44 10.100.0.13
Nov 25 04:14:41 np0005534516 podman[414449]: 2025-11-25 09:14:41.828212096 +0000 UTC m=+0.065462461 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:14:41 np0005534516 podman[414450]: 2025-11-25 09:14:41.832506213 +0000 UTC m=+0.062327625 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 04:14:41 np0005534516 nova_compute[253538]: 2025-11-25 09:14:41.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 695 KiB/s wr, 65 op/s
Nov 25 04:14:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 236 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Nov 25 04:14:44 np0005534516 nova_compute[253538]: 2025-11-25 09:14:44.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 04:14:46 np0005534516 nova_compute[253538]: 2025-11-25 09:14:46.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:47 np0005534516 podman[414489]: 2025-11-25 09:14:47.858449932 +0000 UTC m=+0.101378477 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 04:14:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:14:49 np0005534516 nova_compute[253538]: 2025-11-25 09:14:49.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:49 np0005534516 nova_compute[253538]: 2025-11-25 09:14:49.986 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:14:51 np0005534516 nova_compute[253538]: 2025-11-25 09:14:51.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:51.859 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:14:51 np0005534516 nova_compute[253538]: 2025-11-25 09:14:51.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:51 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:51.860 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:14:51 np0005534516 nova_compute[253538]: 2025-11-25 09:14:51.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.099 253542 DEBUG nova.compute.manager [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.099 253542 DEBUG nova.compute.manager [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.262 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.264 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.265 253542 INFO nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Terminating instance#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.266 253542 DEBUG nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:14:52 np0005534516 kernel: tap8567d2b8-5f (unregistering): left promiscuous mode
Nov 25 04:14:52 np0005534516 NetworkManager[48915]: <info>  [1764062092.3286] device (tap8567d2b8-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:14:52 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:52Z|01590|binding|INFO|Releasing lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a from this chassis (sb_readonly=0)
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:52Z|01591|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a down in Southbound
Nov 25 04:14:52 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:52Z|01592|binding|INFO|Removing iface tap8567d2b8-5f ovn-installed in OVS
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.379 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], port_security=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:feef:e344/64 2001:db8::f816:3eff:feef:e344/64', 'neutron:device_id': '49fdf548-77e1-47b2-9118-f42acc3a4052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8567d2b8-5fd3-45d6-9d10-d88839de3d8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.380 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a in datapath a0d85633-9402-4022-8c0a-b00348775e93 unbound from our chassis#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.381 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93#033[00m
Nov 25 04:14:52 np0005534516 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 25 04:14:52 np0005534516 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000095.scope: Consumed 14.017s CPU time.
Nov 25 04:14:52 np0005534516 systemd-machined[215790]: Machine qemu-179-instance-00000095 terminated.
Nov 25 04:14:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fd852-d368-4533-93a4-d4dd2322adb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3623434d-559d-44fb-986d-6c8e3a1dcd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.451 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[878d7676-fec2-42ab-b616-e4a2f473e892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.490 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbacf2-15aa-420f-a8d0-10edc135cb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.500 253542 INFO nova.virt.libvirt.driver [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance destroyed successfully.#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.501 253542 DEBUG nova.objects.instance [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.514 253542 DEBUG nova.virt.libvirt.vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:14:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:14:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.514 253542 DEBUG nova.network.os_vif_util [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.515 253542 DEBUG nova.network.os_vif_util [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.516 253542 DEBUG os_vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8567d2b8-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf2096f-0c74-4381-9690-6a9142ace8c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414536, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.525 253542 INFO os_vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f')#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.538 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70a8f38d-a254-4939-a35d-fcfdad246b00]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738867, 'tstamp': 738867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414542, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738869, 'tstamp': 738869}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414542, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.540 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.544 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:52 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.544 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.958 253542 INFO nova.virt.libvirt.driver [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deleting instance files /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052_del#033[00m
Nov 25 04:14:52 np0005534516 nova_compute[253538]: 2025-11-25 09:14:52.959 253542 INFO nova.virt.libvirt.driver [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deletion of /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052_del complete#033[00m
Nov 25 04:14:53 np0005534516 nova_compute[253538]: 2025-11-25 09:14:53.109 253542 INFO nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:14:53 np0005534516 nova_compute[253538]: 2025-11-25 09:14:53.110 253542 DEBUG oslo.service.loopingcall [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:14:53 np0005534516 nova_compute[253538]: 2025-11-25 09:14:53.111 253542 DEBUG nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:14:53 np0005534516 nova_compute[253538]: 2025-11-25 09:14:53.111 253542 DEBUG nova.network.neutron [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:14:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:14:53
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'volumes', 'images']
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:14:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.021 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.022 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.117 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.119 253542 DEBUG nova.network.neutron [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.161 253542 INFO nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 1.05 seconds to deallocate network for instance.#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.176 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 WARNING nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received unexpected event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.208 253542 DEBUG nova.compute.manager [req-1d849c1b-bdcd-427c-b26f-3b7f1278ee7e req-a7f9e33f-7720-409d-851c-bed139f4a90e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-deleted-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.223 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.224 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.306 253542 DEBUG oslo_concurrency.processutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 224 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:14:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965543821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.754 253542 DEBUG oslo_concurrency.processutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.761 253542 DEBUG nova.compute.provider_tree [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.776 253542 DEBUG nova.scheduler.client.report [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.794 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.825 253542 INFO nova.scheduler.client.report [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 49fdf548-77e1-47b2-9118-f42acc3a4052#033[00m
Nov 25 04:14:54 np0005534516 nova_compute[253538]: 2025-11-25 09:14:54.904 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:55.863 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.308 253542 DEBUG nova.compute.manager [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.308 253542 DEBUG nova.compute.manager [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.309 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.310 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.310 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:14:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 709 KiB/s wr, 39 op/s
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.532 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.533 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.534 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.534 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.535 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.537 253542 INFO nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Terminating instance#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.538 253542 DEBUG nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:14:56 np0005534516 kernel: tapbc0d7fbf-1c (unregistering): left promiscuous mode
Nov 25 04:14:56 np0005534516 NetworkManager[48915]: <info>  [1764062096.6072] device (tapbc0d7fbf-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:56Z|01593|binding|INFO|Releasing lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 from this chassis (sb_readonly=0)
Nov 25 04:14:56 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:56Z|01594|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 down in Southbound
Nov 25 04:14:56 np0005534516 ovn_controller[152859]: 2025-11-25T09:14:56Z|01595|binding|INFO|Removing iface tapbc0d7fbf-1c ovn-installed in OVS
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.642 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], port_security=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe57:11da/64 2001:db8::f816:3eff:fe57:11da/64', 'neutron:device_id': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc0d7fbf-1c1d-43bc-884b-d89f14be4712) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.645 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 in datapath a0d85633-9402-4022-8c0a-b00348775e93 unbound from our chassis#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.647 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4d3393-7ffa-49e7-98a2-965ab8852651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.649 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 namespace which is not needed anymore#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 25 04:14:56 np0005534516 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000094.scope: Consumed 15.125s CPU time.
Nov 25 04:14:56 np0005534516 systemd-machined[215790]: Machine qemu-178-instance-00000094 terminated.
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.782 253542 INFO nova.virt.libvirt.driver [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance destroyed successfully.#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.782 253542 DEBUG nova.objects.instance [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.793 253542 DEBUG nova.virt.libvirt.vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:13:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:13:54Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.794 253542 DEBUG nova.network.os_vif_util [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.795 253542 DEBUG nova.network.os_vif_util [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.795 253542 DEBUG os_vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.797 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc0d7fbf-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.804 253542 INFO os_vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c')#033[00m
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : haproxy version is 2.8.14-c23fe91
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : path to executable is /usr/sbin/haproxy
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : Exiting Master process...
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : Exiting Master process...
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [ALERT]    (413062) : Current worker (413064) exited with code 143 (Terminated)
Nov 25 04:14:56 np0005534516 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : All workers exited. Exiting... (0)
Nov 25 04:14:56 np0005534516 systemd[1]: libpod-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope: Deactivated successfully.
Nov 25 04:14:56 np0005534516 podman[414608]: 2025-11-25 09:14:56.82835194 +0000 UTC m=+0.060358282 container died 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 04:14:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-92dfa951f831771d47b5f3c19baa67a33147d7a8d9cd91e8a74b897c79b9363c-merged.mount: Deactivated successfully.
Nov 25 04:14:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9-userdata-shm.mount: Deactivated successfully.
Nov 25 04:14:56 np0005534516 podman[414608]: 2025-11-25 09:14:56.870027403 +0000 UTC m=+0.102033705 container cleanup 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:14:56 np0005534516 systemd[1]: libpod-conmon-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope: Deactivated successfully.
Nov 25 04:14:56 np0005534516 podman[414662]: 2025-11-25 09:14:56.943295435 +0000 UTC m=+0.047835942 container remove 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4f7f76-2d58-42af-afc8-c9d742a44eed]: (4, ('Tue Nov 25 09:14:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 (7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9)\n7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9\nTue Nov 25 09:14:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 (7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9)\n7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[403d629f-ffa7-4350-9a73-e517ccfd00cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.957 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 kernel: tapa0d85633-90: left promiscuous mode
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[482cb757-8001-4b22-8ec0-8fdb75dae85c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:56 np0005534516 nova_compute[253538]: 2025-11-25 09:14:56.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.993 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[429e5248-490f-4a67-9915-a704105876ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:56 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5740a8-0fdc-4673-9c1b-4191f3605c6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.011 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[332e2d1a-a59f-45a3-bdec-27c4b3b39469]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738847, 'reachable_time': 34208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414678, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:57 np0005534516 systemd[1]: run-netns-ovnmeta\x2da0d85633\x2d9402\x2d4022\x2d8c0a\x2db00348775e93.mount: Deactivated successfully.
Nov 25 04:14:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.014 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:14:57 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.015 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ac7ccc-c4c7-4460-848d-b9a911bebbec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.195 253542 INFO nova.virt.libvirt.driver [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deleting instance files /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_del#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.196 253542 INFO nova.virt.libvirt.driver [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deletion of /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_del complete#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.355 253542 INFO nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG oslo.service.loopingcall [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:14:57 np0005534516 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG nova.network.neutron [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:14:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.377 253542 DEBUG nova.network.neutron [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 147 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 32 op/s
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.413 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 WARNING nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received unexpected event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.437 253542 INFO nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.485 253542 DEBUG nova.compute.manager [req-8c371b19-424f-47e8-97a9-5e671eed0634 req-1422bebf-84c5-4199-a9d3-97b237218ae1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-deleted-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.515 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.516 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.546 253542 DEBUG oslo_concurrency.processutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.591 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.772 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.795 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.796 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:14:58 np0005534516 nova_compute[253538]: 2025-11-25 09:14:58.830 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:14:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:14:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2934596963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.019 253542 DEBUG oslo_concurrency.processutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.030 253542 DEBUG nova.compute.provider_tree [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.045 253542 DEBUG nova.scheduler.client.report [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.141 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.244 253542 INFO nova.scheduler.client.report [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 3525156a-e9c9-40b7-88f6-db0de5eb3cd1#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.262 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.307 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.308 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.492 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:14:59 np0005534516 nova_compute[253538]: 2025-11-25 09:14:59.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 129 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 24 KiB/s wr, 47 op/s
Nov 25 04:15:01 np0005534516 nova_compute[253538]: 2025-11-25 09:15:01.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 04:15:02 np0005534516 nova_compute[253538]: 2025-11-25 09:15:02.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:02 np0005534516 nova_compute[253538]: 2025-11-25 09:15:02.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:15:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:03 np0005534516 nova_compute[253538]: 2025-11-25 09:15:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:15:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:15:04 np0005534516 nova_compute[253538]: 2025-11-25 09:15:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:15:04 np0005534516 nova_compute[253538]: 2025-11-25 09:15:04.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:15:05 np0005534516 nova_compute[253538]: 2025-11-25 09:15:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0d227d0d-0490-4f3f-90e1-4fb7b1c518eb does not exist
Nov 25 04:15:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e0a2ee8c-dbe3-4bbe-b944-0282fc37092d does not exist
Nov 25 04:15:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e209057a-81f0-4f1c-aaa4-a77683fa7793 does not exist
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:06 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:15:06 np0005534516 nova_compute[253538]: 2025-11-25 09:15:06.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.695973481 +0000 UTC m=+0.118041260 container create 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.620744215 +0000 UTC m=+0.042812074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:06 np0005534516 systemd[1]: Started libpod-conmon-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope.
Nov 25 04:15:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.797936273 +0000 UTC m=+0.220004082 container init 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:15:06 np0005534516 nova_compute[253538]: 2025-11-25 09:15:06.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.809826876 +0000 UTC m=+0.231894655 container start 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.81586389 +0000 UTC m=+0.237931669 container attach 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:15:06 np0005534516 nice_northcutt[414985]: 167 167
Nov 25 04:15:06 np0005534516 systemd[1]: libpod-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope: Deactivated successfully.
Nov 25 04:15:06 np0005534516 conmon[414985]: conmon 61ffce8873919a4ca8c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope/container/memory.events
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.820919917 +0000 UTC m=+0.242987696 container died 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:15:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5526869d61dd4feab378e0141362676a313ab9b8662098db99073eb3772e00ab-merged.mount: Deactivated successfully.
Nov 25 04:15:06 np0005534516 podman[414969]: 2025-11-25 09:15:06.861345316 +0000 UTC m=+0.283413095 container remove 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:15:06 np0005534516 systemd[1]: libpod-conmon-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope: Deactivated successfully.
Nov 25 04:15:07 np0005534516 podman[415012]: 2025-11-25 09:15:07.01858518 +0000 UTC m=+0.041791696 container create 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:15:07 np0005534516 systemd[1]: Started libpod-conmon-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope.
Nov 25 04:15:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:07 np0005534516 podman[415012]: 2025-11-25 09:15:06.999577194 +0000 UTC m=+0.022783720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:07 np0005534516 podman[415012]: 2025-11-25 09:15:07.101517765 +0000 UTC m=+0.124724311 container init 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:15:07 np0005534516 podman[415012]: 2025-11-25 09:15:07.108207267 +0000 UTC m=+0.131413783 container start 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:15:07 np0005534516 podman[415012]: 2025-11-25 09:15:07.111215669 +0000 UTC m=+0.134422185 container attach 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 04:15:07 np0005534516 nova_compute[253538]: 2025-11-25 09:15:07.498 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062092.4977322, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:15:07 np0005534516 nova_compute[253538]: 2025-11-25 09:15:07.499 253542 INFO nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Stopped (Lifecycle Event)
Nov 25 04:15:07 np0005534516 nova_compute[253538]: 2025-11-25 09:15:07.514 253542 DEBUG nova.compute.manager [None req-4342b7cd-a5da-4645-b592-a039e84dec6a - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:15:07 np0005534516 nova_compute[253538]: 2025-11-25 09:15:07.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:15:07 np0005534516 nova_compute[253538]: 2025-11-25 09:15:07.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:15:08 np0005534516 nifty_newton[415028]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:15:08 np0005534516 nifty_newton[415028]: --> relative data size: 1.0
Nov 25 04:15:08 np0005534516 nifty_newton[415028]: --> All data devices are unavailable
Nov 25 04:15:08 np0005534516 systemd[1]: libpod-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Deactivated successfully.
Nov 25 04:15:08 np0005534516 systemd[1]: libpod-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Consumed 1.009s CPU time.
Nov 25 04:15:08 np0005534516 podman[415012]: 2025-11-25 09:15:08.158416096 +0000 UTC m=+1.181622602 container died 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:15:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c-merged.mount: Deactivated successfully.
Nov 25 04:15:08 np0005534516 podman[415012]: 2025-11-25 09:15:08.228845441 +0000 UTC m=+1.252051957 container remove 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:15:08 np0005534516 systemd[1]: libpod-conmon-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Deactivated successfully.
Nov 25 04:15:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 38 op/s
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:15:08 np0005534516 nova_compute[253538]: 2025-11-25 09:15:08.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.844183478 +0000 UTC m=+0.039459384 container create edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 04:15:08 np0005534516 systemd[1]: Started libpod-conmon-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope.
Nov 25 04:15:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.826425005 +0000 UTC m=+0.021700921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.925383525 +0000 UTC m=+0.120659441 container init edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.931426779 +0000 UTC m=+0.126702695 container start edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.935252904 +0000 UTC m=+0.130528820 container attach edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:15:08 np0005534516 frosty_greider[415243]: 167 167
Nov 25 04:15:08 np0005534516 systemd[1]: libpod-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope: Deactivated successfully.
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.937059602 +0000 UTC m=+0.132335498 container died edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:15:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4397de3385fac24da8c3fe6e7a4cc398dcbede03eec7d683967c61223cedc716-merged.mount: Deactivated successfully.
Nov 25 04:15:08 np0005534516 podman[415226]: 2025-11-25 09:15:08.970136942 +0000 UTC m=+0.165412838 container remove edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:15:08 np0005534516 systemd[1]: libpod-conmon-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope: Deactivated successfully.
Nov 25 04:15:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:15:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1681472048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.025 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:09 np0005534516 podman[415271]: 2025-11-25 09:15:09.147617286 +0000 UTC m=+0.049232929 container create 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:15:09 np0005534516 systemd[1]: Started libpod-conmon-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope.
Nov 25 04:15:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:09 np0005534516 podman[415271]: 2025-11-25 09:15:09.124828137 +0000 UTC m=+0.026443810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.229 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:09 np0005534516 podman[415271]: 2025-11-25 09:15:09.231511057 +0000 UTC m=+0.133126700 container init 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 04:15:09 np0005534516 podman[415271]: 2025-11-25 09:15:09.241840298 +0000 UTC m=+0.143455941 container start 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:15:09 np0005534516 podman[415271]: 2025-11-25 09:15:09.244617153 +0000 UTC m=+0.146232816 container attach 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.305 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:15:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817833574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.751 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.758 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.773 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.795 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:15:09 np0005534516 nova_compute[253538]: 2025-11-25 09:15:09.796 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]: {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    "0": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "devices": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "/dev/loop3"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            ],
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_name": "ceph_lv0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_size": "21470642176",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "name": "ceph_lv0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "tags": {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_name": "ceph",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.crush_device_class": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.encrypted": "0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_id": "0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.vdo": "0"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            },
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "vg_name": "ceph_vg0"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        }
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    ],
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    "1": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "devices": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "/dev/loop4"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            ],
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_name": "ceph_lv1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_size": "21470642176",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "name": "ceph_lv1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "tags": {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_name": "ceph",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.crush_device_class": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.encrypted": "0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_id": "1",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.vdo": "0"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            },
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "vg_name": "ceph_vg1"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        }
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    ],
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    "2": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "devices": [
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "/dev/loop5"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            ],
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_name": "ceph_lv2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_size": "21470642176",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "name": "ceph_lv2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "tags": {
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.cluster_name": "ceph",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.crush_device_class": "",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.encrypted": "0",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osd_id": "2",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:                "ceph.vdo": "0"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            },
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "type": "block",
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:            "vg_name": "ceph_vg2"
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:        }
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]:    ]
Nov 25 04:15:09 np0005534516 romantic_matsumoto[415288]: }
Nov 25 04:15:10 np0005534516 systemd[1]: libpod-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope: Deactivated successfully.
Nov 25 04:15:10 np0005534516 podman[415319]: 2025-11-25 09:15:10.066199897 +0000 UTC m=+0.023502690 container died 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:15:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c-merged.mount: Deactivated successfully.
Nov 25 04:15:10 np0005534516 podman[415319]: 2025-11-25 09:15:10.123537776 +0000 UTC m=+0.080840539 container remove 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:15:10 np0005534516 systemd[1]: libpod-conmon-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope: Deactivated successfully.
Nov 25 04:15:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.822827595 +0000 UTC m=+0.043766260 container create a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:15:10 np0005534516 systemd[1]: Started libpod-conmon-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope.
Nov 25 04:15:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.80128368 +0000 UTC m=+0.022222355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.909932544 +0000 UTC m=+0.130871219 container init a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.919026511 +0000 UTC m=+0.139965166 container start a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.922532426 +0000 UTC m=+0.143471101 container attach a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:15:10 np0005534516 awesome_feynman[415489]: 167 167
Nov 25 04:15:10 np0005534516 systemd[1]: libpod-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope: Deactivated successfully.
Nov 25 04:15:10 np0005534516 podman[415472]: 2025-11-25 09:15:10.926488783 +0000 UTC m=+0.147427438 container died a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:15:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-09d7cad6a588a9ef834755091a5a22e3ce59193f619c1945f235e925e5de8ae9-merged.mount: Deactivated successfully.
Nov 25 04:15:11 np0005534516 podman[415472]: 2025-11-25 09:15:11.032070323 +0000 UTC m=+0.253008978 container remove a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:15:11 np0005534516 systemd[1]: libpod-conmon-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope: Deactivated successfully.
Nov 25 04:15:11 np0005534516 podman[415514]: 2025-11-25 09:15:11.246617346 +0000 UTC m=+0.048910561 container create a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:15:11 np0005534516 systemd[1]: Started libpod-conmon-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope.
Nov 25 04:15:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:11 np0005534516 podman[415514]: 2025-11-25 09:15:11.22910138 +0000 UTC m=+0.031394625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:15:11 np0005534516 podman[415514]: 2025-11-25 09:15:11.338115993 +0000 UTC m=+0.140409228 container init a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:15:11 np0005534516 podman[415514]: 2025-11-25 09:15:11.344326942 +0000 UTC m=+0.146620177 container start a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:15:11 np0005534516 podman[415514]: 2025-11-25 09:15:11.348244398 +0000 UTC m=+0.150537613 container attach a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:15:11 np0005534516 nova_compute[253538]: 2025-11-25 09:15:11.779 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062096.778285, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:15:11 np0005534516 nova_compute[253538]: 2025-11-25 09:15:11.780 253542 INFO nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:15:11 np0005534516 nova_compute[253538]: 2025-11-25 09:15:11.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:11 np0005534516 nova_compute[253538]: 2025-11-25 09:15:11.854 253542 DEBUG nova.compute.manager [None req-f80ef751-508e-439c-a966-b6d22a063f28 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]: {
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_id": 1,
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "type": "bluestore"
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    },
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_id": 2,
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "type": "bluestore"
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    },
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_id": 0,
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:        "type": "bluestore"
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]:    }
Nov 25 04:15:12 np0005534516 inspiring_rosalind[415531]: }
Nov 25 04:15:12 np0005534516 systemd[1]: libpod-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope: Deactivated successfully.
Nov 25 04:15:12 np0005534516 podman[415514]: 2025-11-25 09:15:12.33712483 +0000 UTC m=+1.139418055 container died a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:15:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded-merged.mount: Deactivated successfully.
Nov 25 04:15:12 np0005534516 podman[415514]: 2025-11-25 09:15:12.401724716 +0000 UTC m=+1.204017951 container remove a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 04:15:12 np0005534516 systemd[1]: libpod-conmon-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope: Deactivated successfully.
Nov 25 04:15:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 511 B/s wr, 10 op/s
Nov 25 04:15:12 np0005534516 podman[415572]: 2025-11-25 09:15:12.439247116 +0000 UTC m=+0.068180154 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:15:12 np0005534516 podman[415564]: 2025-11-25 09:15:12.444122318 +0000 UTC m=+0.073061847 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:15:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:15:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:15:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 54f3ffc0-816c-4b09-ac25-26edfcf15506 does not exist
Nov 25 04:15:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 816428e8-7017-49dc-8bdd-bbdc72dac66a does not exist
Nov 25 04:15:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:15:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:14 np0005534516 nova_compute[253538]: 2025-11-25 09:15:14.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:16 np0005534516 nova_compute[253538]: 2025-11-25 09:15:16.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:18 np0005534516 podman[415662]: 2025-11-25 09:15:18.891253118 +0000 UTC m=+0.131110215 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 04:15:19 np0005534516 nova_compute[253538]: 2025-11-25 09:15:19.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:21 np0005534516 nova_compute[253538]: 2025-11-25 09:15:21.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:24 np0005534516 nova_compute[253538]: 2025-11-25 09:15:24.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:26 np0005534516 nova_compute[253538]: 2025-11-25 09:15:26.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.977 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:95:46 10.100.0.2 2001:db8::f816:3eff:fe81:9546'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe81:9546/64', 'neutron:device_id': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cfc44526-993c-46ae-8c7c-2505531aa9fc) old=Port_Binding(mac=['fa:16:3e:81:95:46 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:15:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.979 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cfc44526-993c-46ae-8c7c-2505531aa9fc in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 updated#033[00m
Nov 25 04:15:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.981 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:15:27 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.982 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e224885a-0adf-4b8d-bb70-db43482a7f3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:15:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:15:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:15:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:15:29 np0005534516 nova_compute[253538]: 2025-11-25 09:15:29.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:31 np0005534516 nova_compute[253538]: 2025-11-25 09:15:31.855 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.190 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.191 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.204 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.280 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.280 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.291 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.291 253542 INFO nova.compute.claims [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.400 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:15:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555988415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.909 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.915 253542 DEBUG nova.compute.provider_tree [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.928 253542 DEBUG nova.scheduler.client.report [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.948 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:34 np0005534516 nova_compute[253538]: 2025-11-25 09:15:34.950 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.005 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.005 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.025 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.050 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.201 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.204 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.205 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating image(s)#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.240 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.267 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.293 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.296 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.340 253542 DEBUG nova.policy [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.375 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.377 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.378 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.378 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.404 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.407 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2abdf1f8-0c71-459d-8467-ec8825219eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.718 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2abdf1f8-0c71-459d-8467-ec8825219eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.818 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.923 253542 DEBUG nova.objects.instance [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.936 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.937 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Ensure instance console log exists: /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.937 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.938 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:35 np0005534516 nova_compute[253538]: 2025-11-25 09:15:35.938 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:15:36 np0005534516 nova_compute[253538]: 2025-11-25 09:15:36.844 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Successfully created port: f30cb228-eac2-4d17-a356-bec8d6ae142a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:15:36 np0005534516 nova_compute[253538]: 2025-11-25 09:15:36.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.458 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Successfully updated port: f30cb228-eac2-4d17-a356-bec8d6ae142a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.475 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.475 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.476 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG nova.compute.manager [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG nova.compute.manager [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:15:37 np0005534516 nova_compute[253538]: 2025-11-25 09:15:37.818 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:15:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.865 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.896 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.897 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance network_info: |[{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.898 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.898 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.903 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start _get_guest_xml network_info=[{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.909 253542 WARNING nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.920 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.922 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.928 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.929 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.929 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.930 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.931 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.931 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.932 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.932 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.933 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.933 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.934 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.934 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.935 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.935 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:15:38 np0005534516 nova_compute[253538]: 2025-11-25 09:15:38.941 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:15:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979270653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.434 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.460 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.464 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:15:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441562930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.930 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.932 253542 DEBUG nova.virt.libvirt.vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:15:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.932 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.933 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.935 253542 DEBUG nova.objects.instance [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.954 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <uuid>2abdf1f8-0c71-459d-8467-ec8825219eda</uuid>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <name>instance-00000096</name>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-5134712</nova:name>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:15:38</nova:creationTime>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <nova:port uuid="f30cb228-eac2-4d17-a356-bec8d6ae142a">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fede:b878" ipVersion="6"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="serial">2abdf1f8-0c71-459d-8467-ec8825219eda</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="uuid">2abdf1f8-0c71-459d-8467-ec8825219eda</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2abdf1f8-0c71-459d-8467-ec8825219eda_disk">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:de:b8:78"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <target dev="tapf30cb228-ea"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/console.log" append="off"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:15:39 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:15:39 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:15:39 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:15:39 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.955 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Preparing to wait for external event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.957 253542 DEBUG nova.virt.libvirt.vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:15:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.958 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.958 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.959 253542 DEBUG os_vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.959 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.960 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.960 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.964 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf30cb228-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.965 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf30cb228-ea, col_values=(('external_ids', {'iface-id': 'f30cb228-eac2-4d17-a356-bec8d6ae142a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:b8:78', 'vm-uuid': '2abdf1f8-0c71-459d-8467-ec8825219eda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:39 np0005534516 NetworkManager[48915]: <info>  [1764062139.9679] manager: (tapf30cb228-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:39 np0005534516 nova_compute[253538]: 2025-11-25 09:15:39.974 253542 INFO os_vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea')#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.015 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.016 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.016 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:de:b8:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.017 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Using config drive#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.044 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.341 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating config drive at /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.348 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ci664wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.444 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.445 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.459 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.490 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ci664wf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.515 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.519 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.694 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.695 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deleting local config drive /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config because it was imported into RBD.#033[00m
Nov 25 04:15:40 np0005534516 kernel: tapf30cb228-ea: entered promiscuous mode
Nov 25 04:15:40 np0005534516 NetworkManager[48915]: <info>  [1764062140.7599] manager: (tapf30cb228-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Nov 25 04:15:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:40Z|01596|binding|INFO|Claiming lport f30cb228-eac2-4d17-a356-bec8d6ae142a for this chassis.
Nov 25 04:15:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:40Z|01597|binding|INFO|f30cb228-eac2-4d17-a356-bec8d6ae142a: Claiming fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:40 np0005534516 systemd-udevd[416013]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:15:40 np0005534516 systemd-machined[215790]: New machine qemu-180-instance-00000096.
Nov 25 04:15:40 np0005534516 NetworkManager[48915]: <info>  [1764062140.8077] device (tapf30cb228-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:15:40 np0005534516 NetworkManager[48915]: <info>  [1764062140.8095] device (tapf30cb228-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], port_security=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8::f816:3eff:fede:b878/64', 'neutron:device_id': '2abdf1f8-0c71-459d-8467-ec8825219eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f30cb228-eac2-4d17-a356-bec8d6ae142a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f30cb228-eac2-4d17-a356-bec8d6ae142a in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 bound to our chassis#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.812 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05a63c23-68a5-46b5-852d-71461c63bb28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.825 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf86f1e83-b1 in ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.827 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf86f1e83-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c126a8-8b8c-42a9-b76b-0f332f002fc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c75cca-d34f-4847-871e-fc27abd7573e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.839 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fcf106-0fd7-4402-a3fd-71d33a4d5f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 systemd[1]: Started Virtual Machine qemu-180-instance-00000096.
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f71455e-4453-4ea2-b41a-1019aad61be4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:40Z|01598|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a ovn-installed in OVS
Nov 25 04:15:40 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:40Z|01599|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a up in Southbound
Nov 25 04:15:40 np0005534516 nova_compute[253538]: 2025-11-25 09:15:40.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.897 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7eaed574-a2b0-4adc-8c6a-8f2804f4bb82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.903 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[076bad99-0288-41f8-8911-daf291f0ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 NetworkManager[48915]: <info>  [1764062140.9041] manager: (tapf86f1e83-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Nov 25 04:15:40 np0005534516 systemd-udevd[416016]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.942 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f4b7a-e04c-49bc-9d8d-ccc1ad60917a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.944 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[49246896-1bf6-4f95-b656-1fa3fef8cdd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:40 np0005534516 NetworkManager[48915]: <info>  [1764062140.9717] device (tapf86f1e83-b0): carrier: link connected
Nov 25 04:15:40 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.978 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a877a279-1f92-4ce1-97bb-478a192911dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af68bba5-8839-4e57-9491-5cc25065fe08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416049, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[915571c7-7420-49bb-a55f-e16669c5fd37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:9546'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749557, 'tstamp': 749557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416050, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24fb9a4a-1af7-4ab0-b15c-b0b091a78c4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416051, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.066 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e8c27d-a677-4c0f-a1a2-191153c2d29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f61fe2eb-70a8-4fea-ac3e-fa44a69be98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:41 np0005534516 kernel: tapf86f1e83-b0: entered promiscuous mode
Nov 25 04:15:41 np0005534516 NetworkManager[48915]: <info>  [1764062141.1293] manager: (tapf86f1e83-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.133 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:15:41 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:41Z|01600|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.135 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ce32f5-001f-4000-b540-891dcdbbe7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.137 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:15:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.137 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'env', 'PROCESS_TAG=haproxy-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.325 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062141.3250024, 2abdf1f8-0c71-459d-8467-ec8825219eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.325 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Started (Lifecycle Event)#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.343 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062141.3272762, 2abdf1f8-0c71-459d-8467-ec8825219eda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.362 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.365 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:15:41 np0005534516 nova_compute[253538]: 2025-11-25 09:15:41.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:15:41 np0005534516 podman[416125]: 2025-11-25 09:15:41.536086965 +0000 UTC m=+0.064919306 container create 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:15:41 np0005534516 systemd[1]: Started libpod-conmon-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope.
Nov 25 04:15:41 np0005534516 podman[416125]: 2025-11-25 09:15:41.498396761 +0000 UTC m=+0.027229152 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:15:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:15:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724b3d88118a671a8ffba4a262a67c17eb9917c6cc062f3440b0fe5cf48c7928/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:15:41 np0005534516 podman[416125]: 2025-11-25 09:15:41.654819802 +0000 UTC m=+0.183652213 container init 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:15:41 np0005534516 podman[416125]: 2025-11-25 09:15:41.661780082 +0000 UTC m=+0.190612423 container start 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:15:41 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : New worker (416146) forked
Nov 25 04:15:41 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : Loading success.
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.011 253542 DEBUG nova.compute.manager [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG nova.compute.manager [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Processing event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.014 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.018 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062142.0183544, 2abdf1f8-0c71-459d-8467-ec8825219eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.018 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.020 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.024 253542 INFO nova.virt.libvirt.driver [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance spawned successfully.#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.024 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.035 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.043 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.043 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.045 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.065 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.274 253542 INFO nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 7.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.275 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.394 253542 INFO nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 8.15 seconds to build instance.#033[00m
Nov 25 04:15:42 np0005534516 nova_compute[253538]: 2025-11-25 09:15:42.424 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:15:42 np0005534516 podman[416156]: 2025-11-25 09:15:42.826421972 +0000 UTC m=+0.062827129 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 04:15:42 np0005534516 podman[416155]: 2025-11-25 09:15:42.836150276 +0000 UTC m=+0.077897079 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:15:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.074 253542 DEBUG nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.075 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.075 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 DEBUG nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 WARNING nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received unexpected event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with vm_state active and task_state None.#033[00m
Nov 25 04:15:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:44 np0005534516 nova_compute[253538]: 2025-11-25 09:15:44.968 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:46 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:46Z|01601|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 04:15:46 np0005534516 NetworkManager[48915]: <info>  [1764062146.2223] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:46 np0005534516 NetworkManager[48915]: <info>  [1764062146.2234] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:46 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:46Z|01602|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG nova.compute.manager [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG nova.compute.manager [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.997 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:15:46 np0005534516 nova_compute[253538]: 2025-11-25 09:15:46.997 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:15:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:48 np0005534516 nova_compute[253538]: 2025-11-25 09:15:48.422 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:15:48 np0005534516 nova_compute[253538]: 2025-11-25 09:15:48.423 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:15:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 04:15:48 np0005534516 nova_compute[253538]: 2025-11-25 09:15:48.441 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:15:49 np0005534516 nova_compute[253538]: 2025-11-25 09:15:49.796 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:49 np0005534516 nova_compute[253538]: 2025-11-25 09:15:49.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:49 np0005534516 podman[416196]: 2025-11-25 09:15:49.859249882 +0000 UTC m=+0.106365362 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:15:49 np0005534516 nova_compute[253538]: 2025-11-25 09:15:49.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 477 KiB/s wr, 75 op/s
Nov 25 04:15:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:15:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:15:53
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:15:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:15:53 np0005534516 nova_compute[253538]: 2025-11-25 09:15:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:15:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:15:54 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 04:15:54 np0005534516 nova_compute[253538]: 2025-11-25 09:15:54.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:54 np0005534516 nova_compute[253538]: 2025-11-25 09:15:54.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:55Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:b8:78 10.100.0.7
Nov 25 04:15:55 np0005534516 ovn_controller[152859]: 2025-11-25T09:15:55Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:b8:78 10.100.0.7
Nov 25 04:15:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Nov 25 04:15:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:15:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 164 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 04:15:58 np0005534516 nova_compute[253538]: 2025-11-25 09:15:58.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:58 np0005534516 nova_compute[253538]: 2025-11-25 09:15:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:15:58 np0005534516 nova_compute[253538]: 2025-11-25 09:15:58.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:15:58 np0005534516 nova_compute[253538]: 2025-11-25 09:15:58.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.823 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:15:59 np0005534516 nova_compute[253538]: 2025-11-25 09:15:59.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:16:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:16:02 np0005534516 nova_compute[253538]: 2025-11-25 09:16:02.926 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:02 np0005534516 nova_compute[253538]: 2025-11-25 09:16:02.938 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:16:02 np0005534516 nova_compute[253538]: 2025-11-25 09:16:02.938 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:16:02 np0005534516 nova_compute[253538]: 2025-11-25 09:16:02.939 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:02 np0005534516 nova_compute[253538]: 2025-11-25 09:16:02.939 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:16:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:03 np0005534516 nova_compute[253538]: 2025-11-25 09:16:03.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:16:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:16:04 np0005534516 nova_compute[253538]: 2025-11-25 09:16:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:04 np0005534516 nova_compute[253538]: 2025-11-25 09:16:04.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:04 np0005534516 nova_compute[253538]: 2025-11-25 09:16:04.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.461 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.462 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.533 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.623 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.624 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.636 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.636 253542 INFO nova.compute.claims [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:16:06 np0005534516 nova_compute[253538]: 2025-11-25 09:16:06.878 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:16:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428635491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.423 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.430 253542 DEBUG nova.compute.provider_tree [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.442 253542 DEBUG nova.scheduler.client.report [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.461 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.462 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.519 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.520 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.661 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.708 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.863 253542 DEBUG nova.policy [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.871 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.872 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.873 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating image(s)#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.897 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.924 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.949 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:07 np0005534516 nova_compute[253538]: 2025-11-25 09:16:07.953 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.033 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.034 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.035 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.035 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.058 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.060 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.361 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.434 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:16:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.542 253542 DEBUG nova.objects.instance [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.613 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.614 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Ensure instance console log exists: /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.615 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.615 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:08 np0005534516 nova_compute[253538]: 2025-11-25 09:16:08.616 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:09 np0005534516 nova_compute[253538]: 2025-11-25 09:16:09.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:09 np0005534516 nova_compute[253538]: 2025-11-25 09:16:09.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:10.973 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:16:10 np0005534516 nova_compute[253538]: 2025-11-25 09:16:10.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:10.976 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:16:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:16:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417734888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.027 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.089 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.090 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.198 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Successfully created port: 53302c95-cc0c-4237-a7f3-dca02953a876 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.274 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.275 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3428MB free_disk=59.928916931152344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.275 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.276 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2abdf1f8-0c71-459d-8467-ec8825219eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 645b40f5-7a87-4de2-8b13-a340bcffd14b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.377 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:16:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834755696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.867 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.876 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.897 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.922 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:16:11 np0005534516 nova_compute[253538]: 2025-11-25 09:16:11.922 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.714 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Successfully updated port: 53302c95-cc0c-4237-a7f3-dca02953a876 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.733 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.733 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.734 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.822 253542 DEBUG nova.compute.manager [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.823 253542 DEBUG nova.compute.manager [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.823 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:16:12 np0005534516 nova_compute[253538]: 2025-11-25 09:16:12.902 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:16:12 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:12.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:13 np0005534516 podman[416552]: 2025-11-25 09:16:13.023441748 +0000 UTC m=+0.079372908 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:16:13 np0005534516 podman[416553]: 2025-11-25 09:16:13.036818582 +0000 UTC m=+0.083100030 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 04:16:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:13 np0005534516 podman[416662]: 2025-11-25 09:16:13.54440327 +0000 UTC m=+0.074686282 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:16:13 np0005534516 podman[416662]: 2025-11-25 09:16:13.655359677 +0000 UTC m=+0.185642679 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.413 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.484 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.485 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance network_info: |[{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.486 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.486 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.488 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start _get_guest_xml network_info=[{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.493 253542 WARNING nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.502 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.503 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.506 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.508 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.508 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.514 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:16:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713355237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:16:14 np0005534516 nova_compute[253538]: 2025-11-25 09:16:14.972 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.002 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.007 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d20ac63c-a6a4-46e7-8a07-adcf0ec29b8f does not exist
Nov 25 04:16:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 872309ce-bedd-4b29-b1f5-7854fe8d781a does not exist
Nov 25 04:16:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev da3f0fd6-2bbf-4f24-8fce-458d1c187fa7 does not exist
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:16:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880378211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.480 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.482 253542 DEBUG nova.virt.libvirt.vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:16:07Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.482 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.483 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.484 253542 DEBUG nova.objects.instance [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.499 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <uuid>645b40f5-7a87-4de2-8b13-a340bcffd14b</uuid>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <name>instance-00000097</name>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestGettingAddress-server-557180333</nova:name>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:16:14</nova:creationTime>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <nova:port uuid="53302c95-cc0c-4237-a7f3-dca02953a876">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe02:9c53" ipVersion="6"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="serial">645b40f5-7a87-4de2-8b13-a340bcffd14b</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="uuid">645b40f5-7a87-4de2-8b13-a340bcffd14b</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/645b40f5-7a87-4de2-8b13-a340bcffd14b_disk">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:02:9c:53"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <target dev="tap53302c95-cc"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/console.log" append="off"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:16:15 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:16:15 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:16:15 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:16:15 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Preparing to wait for external event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.501 253542 DEBUG nova.virt.libvirt.vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:16:07Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.501 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG os_vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.503 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.503 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53302c95-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.506 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53302c95-cc, col_values=(('external_ids', {'iface-id': '53302c95-cc0c-4237-a7f3-dca02953a876', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:9c:53', 'vm-uuid': '645b40f5-7a87-4de2-8b13-a340bcffd14b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:15 np0005534516 NetworkManager[48915]: <info>  [1764062175.5477] manager: (tap53302c95-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.555 253542 INFO os_vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc')#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.596 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.597 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.597 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:02:9c:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.598 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Using config drive#033[00m
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.626 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:15 np0005534516 podman[417173]: 2025-11-25 09:16:15.927872142 +0000 UTC m=+0.064998717 container create 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:16:15 np0005534516 systemd[1]: Started libpod-conmon-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope.
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.978 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating config drive at /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config#033[00m
Nov 25 04:16:15 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:15 np0005534516 nova_compute[253538]: 2025-11-25 09:16:15.982 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkhs75yei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:15 np0005534516 podman[417173]: 2025-11-25 09:16:15.895540823 +0000 UTC m=+0.032667468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:16 np0005534516 podman[417173]: 2025-11-25 09:16:15.99989645 +0000 UTC m=+0.137023045 container init 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:16:16 np0005534516 podman[417173]: 2025-11-25 09:16:16.00725138 +0000 UTC m=+0.144377955 container start 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:16:16 np0005534516 podman[417173]: 2025-11-25 09:16:16.010818777 +0000 UTC m=+0.147945352 container attach 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:16:16 np0005534516 boring_greider[417190]: 167 167
Nov 25 04:16:16 np0005534516 systemd[1]: libpod-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope: Deactivated successfully.
Nov 25 04:16:16 np0005534516 podman[417196]: 2025-11-25 09:16:16.052742786 +0000 UTC m=+0.023844188 container died 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:16:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-36a2d0c1a0a9a9ba1c2ceaf56251b781a11bd497b4e59cb4fab8abd96f3b5519-merged.mount: Deactivated successfully.
Nov 25 04:16:16 np0005534516 podman[417196]: 2025-11-25 09:16:16.088060417 +0000 UTC m=+0.059161799 container remove 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:16:16 np0005534516 systemd[1]: libpod-conmon-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope: Deactivated successfully.
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.136 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkhs75yei" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.169 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.173 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:16 np0005534516 podman[417240]: 2025-11-25 09:16:16.258084399 +0000 UTC m=+0.042509607 container create fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:16:16 np0005534516 systemd[1]: Started libpod-conmon-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope.
Nov 25 04:16:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:16 np0005534516 podman[417240]: 2025-11-25 09:16:16.239598826 +0000 UTC m=+0.024024054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:16 np0005534516 podman[417240]: 2025-11-25 09:16:16.336416078 +0000 UTC m=+0.120841276 container init fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:16:16 np0005534516 podman[417240]: 2025-11-25 09:16:16.348020264 +0000 UTC m=+0.132445472 container start fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:16:16 np0005534516 podman[417240]: 2025-11-25 09:16:16.350746958 +0000 UTC m=+0.135172156 container attach fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.369 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.370 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deleting local config drive /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config because it was imported into RBD.#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.397 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.398 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.412 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:16:16 np0005534516 NetworkManager[48915]: <info>  [1764062176.4215] manager: (tap53302c95-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Nov 25 04:16:16 np0005534516 kernel: tap53302c95-cc: entered promiscuous mode
Nov 25 04:16:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:16Z|01603|binding|INFO|Claiming lport 53302c95-cc0c-4237-a7f3-dca02953a876 for this chassis.
Nov 25 04:16:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:16Z|01604|binding|INFO|53302c95-cc0c-4237-a7f3-dca02953a876: Claiming fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.432 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.447 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], port_security=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe02:9c53/64', 'neutron:device_id': '645b40f5-7a87-4de2-8b13-a340bcffd14b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=53302c95-cc0c-4237-a7f3-dca02953a876) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.448 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 53302c95-cc0c-4237-a7f3-dca02953a876 in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 bound to our chassis#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.450 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6#033[00m
Nov 25 04:16:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:16Z|01605|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 ovn-installed in OVS
Nov 25 04:16:16 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:16Z|01606|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 up in Southbound
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:16 np0005534516 systemd-udevd[417291]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f524062e-b248-46b4-9ba4-e6ea6eba7c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 systemd-machined[215790]: New machine qemu-181-instance-00000097.
Nov 25 04:16:16 np0005534516 NetworkManager[48915]: <info>  [1764062176.4823] device (tap53302c95-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:16:16 np0005534516 NetworkManager[48915]: <info>  [1764062176.4836] device (tap53302c95-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:16:16 np0005534516 systemd[1]: Started Virtual Machine qemu-181-instance-00000097.
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.504 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2185ff63-5468-4e71-8775-4b26873c47a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.507 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c679241-05d9-4ada-a0fc-95ef988ba84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.538 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd5873d-3c02-4c27-903f-d7d5a381861c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec930246-49e0-4498-9013-b16675adceb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417304, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf8371e-bb19-446e-9fb3-d2fa7d0f62c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749569, 'tstamp': 749569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417306, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749571, 'tstamp': 749571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417306, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.580 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:16 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9817286, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.982 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Started (Lifecycle Event)#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.989 253542 DEBUG nova.compute.manager [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.991 253542 DEBUG nova.compute.manager [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Processing event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.991 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.995 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.998 253542 INFO nova.virt.libvirt.driver [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance spawned successfully.
Nov 25 04:16:16 np0005534516 nova_compute[253538]: 2025-11-25 09:16:16.998 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.001 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.004 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.021 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.022 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9819534, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.022 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Paused (Lifecycle Event)
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.026 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.026 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.027 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.027 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.028 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.028 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.055 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.058 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9943283, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.059 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Resumed (Lifecycle Event)
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.101 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.198 253542 INFO nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 9.33 seconds to spawn the instance on the hypervisor.
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.199 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.257 253542 INFO nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 10.66 seconds to build instance.
Nov 25 04:16:17 np0005534516 nova_compute[253538]: 2025-11-25 09:16:17.343 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:16:17 np0005534516 brave_margulis[417274]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:16:17 np0005534516 brave_margulis[417274]: --> relative data size: 1.0
Nov 25 04:16:17 np0005534516 brave_margulis[417274]: --> All data devices are unavailable
Nov 25 04:16:17 np0005534516 systemd[1]: libpod-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Deactivated successfully.
Nov 25 04:16:17 np0005534516 systemd[1]: libpod-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Consumed 1.016s CPU time.
Nov 25 04:16:17 np0005534516 podman[417240]: 2025-11-25 09:16:17.44487452 +0000 UTC m=+1.229299728 container died fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:16:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334-merged.mount: Deactivated successfully.
Nov 25 04:16:17 np0005534516 podman[417240]: 2025-11-25 09:16:17.502589589 +0000 UTC m=+1.287014787 container remove fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:16:17 np0005534516 systemd[1]: libpod-conmon-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Deactivated successfully.
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.14379802 +0000 UTC m=+0.023428278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.271185343 +0000 UTC m=+0.150815581 container create f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:16:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:18 np0005534516 systemd[1]: Started libpod-conmon-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope.
Nov 25 04:16:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 486 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 04:16:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.476547046 +0000 UTC m=+0.356177364 container init f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.489175449 +0000 UTC m=+0.368805717 container start f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.492694754 +0000 UTC m=+0.372325092 container attach f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:16:18 np0005534516 zealous_mcclintock[417542]: 167 167
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.49876568 +0000 UTC m=+0.378395918 container died f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:16:18 np0005534516 systemd[1]: libpod-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope: Deactivated successfully.
Nov 25 04:16:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bb9600aceaa64fc044401ff000cfab8fa319a32a864942c3acb158bbd1946108-merged.mount: Deactivated successfully.
Nov 25 04:16:18 np0005534516 podman[417526]: 2025-11-25 09:16:18.544172204 +0000 UTC m=+0.423802442 container remove f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:16:18 np0005534516 systemd[1]: libpod-conmon-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope: Deactivated successfully.
Nov 25 04:16:18 np0005534516 podman[417565]: 2025-11-25 09:16:18.766497287 +0000 UTC m=+0.067135236 container create 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:16:18 np0005534516 podman[417565]: 2025-11-25 09:16:18.72173224 +0000 UTC m=+0.022370179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:18 np0005534516 systemd[1]: Started libpod-conmon-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope.
Nov 25 04:16:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:18 np0005534516 podman[417565]: 2025-11-25 09:16:18.923361122 +0000 UTC m=+0.223999111 container init 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 04:16:18 np0005534516 podman[417565]: 2025-11-25 09:16:18.933480257 +0000 UTC m=+0.234118156 container start 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 04:16:18 np0005534516 podman[417565]: 2025-11-25 09:16:18.937177797 +0000 UTC m=+0.237815756 container attach 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.083 253542 DEBUG nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.084 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.086 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.086 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.087 253542 DEBUG nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.087 253542 WARNING nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received unexpected event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with vm_state active and task_state None.
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]: {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    "0": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "devices": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "/dev/loop3"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            ],
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_name": "ceph_lv0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_size": "21470642176",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "name": "ceph_lv0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "tags": {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_name": "ceph",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.crush_device_class": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.encrypted": "0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_id": "0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.vdo": "0"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            },
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "vg_name": "ceph_vg0"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        }
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    ],
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    "1": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "devices": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "/dev/loop4"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            ],
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_name": "ceph_lv1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_size": "21470642176",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "name": "ceph_lv1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "tags": {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_name": "ceph",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.crush_device_class": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.encrypted": "0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_id": "1",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.vdo": "0"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            },
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "vg_name": "ceph_vg1"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        }
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    ],
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    "2": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "devices": [
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "/dev/loop5"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            ],
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_name": "ceph_lv2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_size": "21470642176",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "name": "ceph_lv2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "tags": {
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.cluster_name": "ceph",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.crush_device_class": "",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.encrypted": "0",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osd_id": "2",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:                "ceph.vdo": "0"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            },
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "type": "block",
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:            "vg_name": "ceph_vg2"
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:        }
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]:    ]
Nov 25 04:16:19 np0005534516 silly_lovelace[417582]: }
Nov 25 04:16:19 np0005534516 systemd[1]: libpod-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope: Deactivated successfully.
Nov 25 04:16:19 np0005534516 podman[417565]: 2025-11-25 09:16:19.762444482 +0000 UTC m=+1.063082391 container died 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:16:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd-merged.mount: Deactivated successfully.
Nov 25 04:16:19 np0005534516 nova_compute[253538]: 2025-11-25 09:16:19.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:19 np0005534516 podman[417565]: 2025-11-25 09:16:19.819543043 +0000 UTC m=+1.120180952 container remove 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:16:19 np0005534516 systemd[1]: libpod-conmon-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope: Deactivated successfully.
Nov 25 04:16:20 np0005534516 podman[417628]: 2025-11-25 09:16:20.039554164 +0000 UTC m=+0.098396966 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG nova.compute.manager [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG nova.compute.manager [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.198 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.437963504 +0000 UTC m=+0.051775109 container create 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:16:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 67 op/s
Nov 25 04:16:20 np0005534516 systemd[1]: Started libpod-conmon-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope.
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.410516738 +0000 UTC m=+0.024328363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:20 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:16:20 np0005534516 nova_compute[253538]: 2025-11-25 09:16:20.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.580767427 +0000 UTC m=+0.194579042 container init 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.587042467 +0000 UTC m=+0.200854072 container start 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:16:20 np0005534516 xenodochial_williams[417782]: 167 167
Nov 25 04:16:20 np0005534516 systemd[1]: libpod-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope: Deactivated successfully.
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.612445327 +0000 UTC m=+0.226256942 container attach 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.613691391 +0000 UTC m=+0.227502996 container died 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:16:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-521852b2d737ec17599ce289ac518c865d102fa2f94e3402e2b1d5a4ff5d0688-merged.mount: Deactivated successfully.
Nov 25 04:16:20 np0005534516 podman[417767]: 2025-11-25 09:16:20.93149499 +0000 UTC m=+0.545306635 container remove 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:16:20 np0005534516 systemd[1]: libpod-conmon-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope: Deactivated successfully.
Nov 25 04:16:21 np0005534516 podman[417807]: 2025-11-25 09:16:21.164806583 +0000 UTC m=+0.069549022 container create a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:16:21 np0005534516 systemd[1]: Started libpod-conmon-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope.
Nov 25 04:16:21 np0005534516 podman[417807]: 2025-11-25 09:16:21.135519207 +0000 UTC m=+0.040261666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:16:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:16:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:16:21 np0005534516 podman[417807]: 2025-11-25 09:16:21.257495993 +0000 UTC m=+0.162238452 container init a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:16:21 np0005534516 podman[417807]: 2025-11-25 09:16:21.269123858 +0000 UTC m=+0.173866227 container start a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:16:21 np0005534516 podman[417807]: 2025-11-25 09:16:21.273390165 +0000 UTC m=+0.178132624 container attach a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:16:22 np0005534516 romantic_newton[417824]: {
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_id": 1,
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "type": "bluestore"
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    },
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_id": 2,
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "type": "bluestore"
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    },
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_id": 0,
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:        "type": "bluestore"
Nov 25 04:16:22 np0005534516 romantic_newton[417824]:    }
Nov 25 04:16:22 np0005534516 romantic_newton[417824]: }
Nov 25 04:16:22 np0005534516 systemd[1]: libpod-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope: Deactivated successfully.
Nov 25 04:16:22 np0005534516 podman[417807]: 2025-11-25 09:16:22.259515702 +0000 UTC m=+1.164258081 container died a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:16:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3-merged.mount: Deactivated successfully.
Nov 25 04:16:22 np0005534516 podman[417807]: 2025-11-25 09:16:22.320713766 +0000 UTC m=+1.225456125 container remove a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:16:22 np0005534516 systemd[1]: libpod-conmon-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope: Deactivated successfully.
Nov 25 04:16:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:16:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:16:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 52dd1973-018e-4e74-8f4f-87779c29a6ba does not exist
Nov 25 04:16:22 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f67eaeed-6825-4ade-9dd0-a7706355aa78 does not exist
Nov 25 04:16:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 620 KiB/s wr, 88 op/s
Nov 25 04:16:22 np0005534516 nova_compute[253538]: 2025-11-25 09:16:22.538 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:16:22 np0005534516 nova_compute[253538]: 2025-11-25 09:16:22.539 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:22 np0005534516 nova_compute[253538]: 2025-11-25 09:16:22.561 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:16:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:16:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 25 04:16:24 np0005534516 nova_compute[253538]: 2025-11-25 09:16:24.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:25 np0005534516 nova_compute[253538]: 2025-11-25 09:16:25.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:16:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 04:16:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:16:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:16:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:16:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:16:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:29Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:9c:53 10.100.0.3
Nov 25 04:16:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:29Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:9c:53 10.100.0.3
Nov 25 04:16:29 np0005534516 nova_compute[253538]: 2025-11-25 09:16:29.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 04:16:30 np0005534516 nova_compute[253538]: 2025-11-25 09:16:30.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 237 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Nov 25 04:16:32 np0005534516 nova_compute[253538]: 2025-11-25 09:16:32.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:16:32 np0005534516 nova_compute[253538]: 2025-11-25 09:16:32.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:16:32 np0005534516 nova_compute[253538]: 2025-11-25 09:16:32.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:16:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 04:16:34 np0005534516 nova_compute[253538]: 2025-11-25 09:16:34.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:35 np0005534516 nova_compute[253538]: 2025-11-25 09:16:35.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.656 253542 DEBUG nova.compute.manager [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG nova.compute.manager [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.658 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.705 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.706 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.706 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.707 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.707 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.708 253542 INFO nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Terminating instance#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.709 253542 DEBUG nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:16:37 np0005534516 kernel: tap53302c95-cc (unregistering): left promiscuous mode
Nov 25 04:16:37 np0005534516 NetworkManager[48915]: <info>  [1764062197.7598] device (tap53302c95-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:37Z|01607|binding|INFO|Releasing lport 53302c95-cc0c-4237-a7f3-dca02953a876 from this chassis (sb_readonly=0)
Nov 25 04:16:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:37Z|01608|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 down in Southbound
Nov 25 04:16:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:37Z|01609|binding|INFO|Removing iface tap53302c95-cc ovn-installed in OVS
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.846 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], port_security=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe02:9c53/64', 'neutron:device_id': '645b40f5-7a87-4de2-8b13-a340bcffd14b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=53302c95-cc0c-4237-a7f3-dca02953a876) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.848 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 53302c95-cc0c-4237-a7f3-dca02953a876 in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 unbound from our chassis#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.849 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6#033[00m
Nov 25 04:16:37 np0005534516 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 25 04:16:37 np0005534516 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000097.scope: Consumed 13.579s CPU time.
Nov 25 04:16:37 np0005534516 systemd-machined[215790]: Machine qemu-181-instance-00000097 terminated.
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[425dd0ca-948a-4560-aea1-ef14671e2f7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.903 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[39f38300-c9da-4bcb-bdc0-3265737c1413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.907 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1a694680-de2f-4dfc-b6f4-f2669c87faa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.938 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40149a49-6727-4134-9995-134351519c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.948 253542 INFO nova.virt.libvirt.driver [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance destroyed successfully.#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.949 253542 DEBUG nova.objects.instance [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e05496-166f-4fcb-a52e-469f4f059def]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417941, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.962 253542 DEBUG nova.virt.libvirt.vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:16:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:16:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.963 253542 DEBUG nova.network.os_vif_util [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.964 253542 DEBUG nova.network.os_vif_util [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.964 253542 DEBUG os_vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53302c95-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.973 253542 INFO os_vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc')#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76004572-768d-42d2-9b22-fbb1da189694]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749569, 'tstamp': 749569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417946, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749571, 'tstamp': 749571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417946, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.981 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:16:37 np0005534516 nova_compute[253538]: 2025-11-25 09:16:37.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.135 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.370 253542 INFO nova.virt.libvirt.driver [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deleting instance files /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b_del#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.371 253542 INFO nova.virt.libvirt.driver [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deletion of /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b_del complete#033[00m
Nov 25 04:16:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.491 253542 INFO nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.491 253542 DEBUG oslo.service.loopingcall [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.492 253542 DEBUG nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:16:38 np0005534516 nova_compute[253538]: 2025-11-25 09:16:38.492 253542 DEBUG nova.network.neutron [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:16:39 np0005534516 nova_compute[253538]: 2025-11-25 09:16:39.098 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:16:39 np0005534516 nova_compute[253538]: 2025-11-25 09:16:39.099 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:39 np0005534516 nova_compute[253538]: 2025-11-25 09:16:39.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.027 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:16:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.552 253542 DEBUG nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.553 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.554 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.554 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.555 253542 DEBUG nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.555 253542 WARNING nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received unexpected event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.743 253542 DEBUG nova.network.neutron [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.762 253542 INFO nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 2.27 seconds to deallocate network for instance.#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.809 253542 DEBUG nova.compute.manager [req-f26c5f0e-ac4c-4571-ac7e-5755e28c9bab req-7b1d5594-4296-41f9-a0e1-6a10b8c49c95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-deleted-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.833 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.833 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:40 np0005534516 nova_compute[253538]: 2025-11-25 09:16:40.935 253542 DEBUG oslo_concurrency.processutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.101 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:16:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984379008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.396 253542 DEBUG oslo_concurrency.processutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.406 253542 DEBUG nova.compute.provider_tree [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.567 253542 DEBUG nova.scheduler.client.report [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.596 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.634 253542 INFO nova.scheduler.client.report [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 645b40f5-7a87-4de2-8b13-a340bcffd14b#033[00m
Nov 25 04:16:41 np0005534516 nova_compute[253538]: 2025-11-25 09:16:41.729 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.887 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.888 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.889 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.890 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.890 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.892 253542 INFO nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Terminating instance#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.894 253542 DEBUG nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.899 253542 DEBUG nova.compute.manager [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.900 253542 DEBUG nova.compute.manager [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.900 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.901 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.901 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:16:42 np0005534516 kernel: tapf30cb228-ea (unregistering): left promiscuous mode
Nov 25 04:16:42 np0005534516 NetworkManager[48915]: <info>  [1764062202.9661] device (tapf30cb228-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:42Z|01610|binding|INFO|Releasing lport f30cb228-eac2-4d17-a356-bec8d6ae142a from this chassis (sb_readonly=0)
Nov 25 04:16:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:42Z|01611|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a down in Southbound
Nov 25 04:16:42 np0005534516 ovn_controller[152859]: 2025-11-25T09:16:42Z|01612|binding|INFO|Removing iface tapf30cb228-ea ovn-installed in OVS
Nov 25 04:16:42 np0005534516 nova_compute[253538]: 2025-11-25 09:16:42.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.012 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], port_security=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8::f816:3eff:fede:b878/64', 'neutron:device_id': '2abdf1f8-0c71-459d-8467-ec8825219eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f30cb228-eac2-4d17-a356-bec8d6ae142a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.013 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f30cb228-eac2-4d17-a356-bec8d6ae142a in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 unbound from our chassis#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.014 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.015 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[692d7065-890d-4244-ab5f-9f385dab91bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.016 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 namespace which is not needed anymore#033[00m
Nov 25 04:16:43 np0005534516 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 25 04:16:43 np0005534516 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000096.scope: Consumed 15.449s CPU time.
Nov 25 04:16:43 np0005534516 systemd-machined[215790]: Machine qemu-180-instance-00000096 terminated.
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.136 253542 INFO nova.virt.libvirt.driver [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance destroyed successfully.#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.136 253542 DEBUG nova.objects.instance [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.149 253542 DEBUG nova.virt.libvirt.vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:15:42Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.150 253542 DEBUG nova.network.os_vif_util [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.152 253542 DEBUG nova.network.os_vif_util [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.153 253542 DEBUG os_vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.154 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf30cb228-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.161 253542 INFO os_vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea')#033[00m
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : haproxy version is 2.8.14-c23fe91
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : path to executable is /usr/sbin/haproxy
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : Exiting Master process...
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : Exiting Master process...
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [ALERT]    (416144) : Current worker (416146) exited with code 143 (Terminated)
Nov 25 04:16:43 np0005534516 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : All workers exited. Exiting... (0)
Nov 25 04:16:43 np0005534516 systemd[1]: libpod-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope: Deactivated successfully.
Nov 25 04:16:43 np0005534516 podman[418011]: 2025-11-25 09:16:43.183453689 +0000 UTC m=+0.066710905 container died 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 04:16:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb-userdata-shm.mount: Deactivated successfully.
Nov 25 04:16:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-724b3d88118a671a8ffba4a262a67c17eb9917c6cc062f3440b0fe5cf48c7928-merged.mount: Deactivated successfully.
Nov 25 04:16:43 np0005534516 podman[418011]: 2025-11-25 09:16:43.237833668 +0000 UTC m=+0.121090874 container cleanup 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 04:16:43 np0005534516 systemd[1]: libpod-conmon-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope: Deactivated successfully.
Nov 25 04:16:43 np0005534516 podman[418044]: 2025-11-25 09:16:43.281466374 +0000 UTC m=+0.091880689 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:16:43 np0005534516 podman[418036]: 2025-11-25 09:16:43.286620444 +0000 UTC m=+0.101878310 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 04:16:43 np0005534516 podman[418102]: 2025-11-25 09:16:43.328842691 +0000 UTC m=+0.060260369 container remove 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.336 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36d25076-e2ba-415b-ab4e-54004f54538b]: (4, ('Tue Nov 25 09:16:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 (919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb)\n919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb\nTue Nov 25 09:16:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 (919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb)\n919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.338 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[758a6a7a-c320-459b-ab34-ad90c2ca69fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.339 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 kernel: tapf86f1e83-b0: left promiscuous mode
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a822c0-585b-4478-9bdb-102e4f0b4387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.375 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[535d18ad-373d-4979-b76d-14e474a71e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[075a6820-b4b0-487a-b4f3-2a562318b8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4863132b-8b26-4e73-8e9d-8cbb03e62286]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749549, 'reachable_time': 35827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418127, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 systemd[1]: run-netns-ovnmeta\x2df86f1e83\x2db07b\x2d4abd\x2dbc9a\x2d7c03f3634fc6.mount: Deactivated successfully.
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.397 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:16:43 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.397 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd78e63-3542-4926-8148-baaf269cf6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.521 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.523 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.523 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.612 253542 INFO nova.virt.libvirt.driver [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deleting instance files /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda_del#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.613 253542 INFO nova.virt.libvirt.driver [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deletion of /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda_del complete#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.726 253542 INFO nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG oslo.service.loopingcall [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:16:43 np0005534516 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG nova.network.neutron [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:16:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 730 KiB/s wr, 63 op/s
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.623 253542 DEBUG nova.network.neutron [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.641 253542 INFO nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 0.91 seconds to deallocate network for instance.
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.694 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.695 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.744 253542 DEBUG oslo_concurrency.processutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.890 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.891 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.954 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 DEBUG nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-deleted-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 INFO nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Neutron deleted interface f30cb228-eac2-4d17-a356-bec8d6ae142a; detaching it from the instance and deleting it from the info cache
Nov 25 04:16:44 np0005534516 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 DEBUG nova.network.neutron [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.013 253542 DEBUG nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Detach interface failed, port_id=f30cb228-eac2-4d17-a356-bec8d6ae142a, reason: Instance 2abdf1f8-0c71-459d-8467-ec8825219eda could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 04:16:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:16:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053815818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.207 253542 DEBUG oslo_concurrency.processutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.214 253542 DEBUG nova.compute.provider_tree [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.232 253542 DEBUG nova.scheduler.client.report [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.270 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.336 253542 INFO nova.scheduler.client.report [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 2abdf1f8-0c71-459d-8467-ec8825219eda
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.417 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.608 253542 DEBUG nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.608 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:16:45 np0005534516 nova_compute[253538]: 2025-11-25 09:16:45.610 253542 WARNING nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received unexpected event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with vm_state deleted and task_state None.
Nov 25 04:16:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 127 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 33 KiB/s wr, 52 op/s
Nov 25 04:16:48 np0005534516 nova_compute[253538]: 2025-11-25 09:16:48.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 17 KiB/s wr, 57 op/s
Nov 25 04:16:48 np0005534516 nova_compute[253538]: 2025-11-25 09:16:48.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:16:49 np0005534516 nova_compute[253538]: 2025-11-25 09:16:49.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 5.7 KiB/s wr, 52 op/s
Nov 25 04:16:50 np0005534516 nova_compute[253538]: 2025-11-25 09:16:50.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:50 np0005534516 nova_compute[253538]: 2025-11-25 09:16:50.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:50 np0005534516 podman[418152]: 2025-11-25 09:16:50.875423978 +0000 UTC m=+0.119799578 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 04:16:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.4 KiB/s wr, 38 op/s
Nov 25 04:16:52 np0005534516 nova_compute[253538]: 2025-11-25 09:16:52.947 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062197.9445813, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:16:52 np0005534516 nova_compute[253538]: 2025-11-25 09:16:52.948 253542 INFO nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Stopped (Lifecycle Event)
Nov 25 04:16:52 np0005534516 nova_compute[253538]: 2025-11-25 09:16:52.976 253542 DEBUG nova.compute.manager [None req-e9777e07-eacc-4f83-817b-1fa92c9fd250 - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:53 np0005534516 nova_compute[253538]: 2025-11-25 09:16:53.160 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:16:53
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:16:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:16:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Nov 25 04:16:54 np0005534516 nova_compute[253538]: 2025-11-25 09:16:54.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:16:54 np0005534516 nova_compute[253538]: 2025-11-25 09:16:54.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.8 KiB/s wr, 22 op/s
Nov 25 04:16:58 np0005534516 nova_compute[253538]: 2025-11-25 09:16:58.136 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062203.13455, 2abdf1f8-0c71-459d-8467-ec8825219eda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:16:58 np0005534516 nova_compute[253538]: 2025-11-25 09:16:58.136 253542 INFO nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Stopped (Lifecycle Event)
Nov 25 04:16:58 np0005534516 nova_compute[253538]: 2025-11-25 09:16:58.153 253542 DEBUG nova.compute.manager [None req-588e48fd-0318-4279-92ed-14a21bec7ec2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:16:58 np0005534516 nova_compute[253538]: 2025-11-25 09:16:58.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:16:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:16:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 04:16:59 np0005534516 nova_compute[253538]: 2025-11-25 09:16:59.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:03 np0005534516 nova_compute[253538]: 2025-11-25 09:17:03.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:03 np0005534516 nova_compute[253538]: 2025-11-25 09:17:03.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:17:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:17:04 np0005534516 nova_compute[253538]: 2025-11-25 09:17:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:04 np0005534516 nova_compute[253538]: 2025-11-25 09:17:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:04 np0005534516 nova_compute[253538]: 2025-11-25 09:17:04.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 04:17:04 np0005534516 nova_compute[253538]: 2025-11-25 09:17:04.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:07 np0005534516 nova_compute[253538]: 2025-11-25 09:17:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:08 np0005534516 nova_compute[253538]: 2025-11-25 09:17:08.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:09 np0005534516 nova_compute[253538]: 2025-11-25 09:17:09.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:09 np0005534516 nova_compute[253538]: 2025-11-25 09:17:09.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.591 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.591 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:17:12 np0005534516 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:17:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:17:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210025578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.083 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.143 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.144 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.165 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.173 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.240 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.241 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.247 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.247 253542 INFO nova.compute.claims [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.285 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.286 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3595MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.287 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.334 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:17:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461204194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.758 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.763 253542 DEBUG nova.compute.provider_tree [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.777 253542 DEBUG nova.scheduler.client.report [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.795 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.795 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:13 np0005534516 podman[418225]: 2025-11-25 09:17:13.803121905 +0000 UTC m=+0.058991995 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 04:17:13 np0005534516 podman[418226]: 2025-11-25 09:17:13.82868751 +0000 UTC m=+0.084099138 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.857 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.857 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.862 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance aefbd3e8-a8ba-4fef-a771-4e2b5091a90a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.863 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.863 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.876 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.893 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.906 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.981 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.983 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:17:13 np0005534516 nova_compute[253538]: 2025-11-25 09:17:13.983 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating image(s)#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.005 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.028 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.048 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.052 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.120 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.121 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.122 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.122 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.158 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.163 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.203 253542 DEBUG nova.policy [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '612906aa606e4268918814ea9f47c674', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c19f20fbaec489eaece7cf904f192fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:17:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:17:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112254869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.392 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.397 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.421 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.443 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.444 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.764 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Successfully created port: 3e3120d7-1164-497f-8e95-61789ab8f383 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:17:14 np0005534516 nova_compute[253538]: 2025-11-25 09:17:14.838 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.438 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.492 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] resizing rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.683 253542 DEBUG nova.objects.instance [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'migration_context' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.698 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.698 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Ensure instance console log exists: /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:15 np0005534516 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.645 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Successfully updated port: 3e3120d7-1164-497f-8e95-61789ab8f383 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.758 253542 DEBUG nova.compute.manager [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-changed-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.759 253542 DEBUG nova.compute.manager [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Refreshing instance network info cache due to event network-changed-3e3120d7-1164-497f-8e95-61789ab8f383. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.759 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:17:16 np0005534516 nova_compute[253538]: 2025-11-25 09:17:16.794 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.138 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.167 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.167 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance network_info: |[{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.168 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.168 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Refreshing network info cache for port 3e3120d7-1164-497f-8e95-61789ab8f383 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.171 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start _get_guest_xml network_info=[{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.174 253542 WARNING nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.175 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.180 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.180 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.189 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.189 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.195 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 782 KiB/s wr, 25 op/s
Nov 25 04:17:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:17:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519843840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.683 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.709 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:18 np0005534516 nova_compute[253538]: 2025-11-25 09:17:18.716 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:17:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498757576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.191 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.192 253542 DEBUG nova.virt.libvirt.vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvanced
Ops-1902459302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:17:13Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.193 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.194 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.195 253542 DEBUG nova.objects.instance [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.209 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <uuid>aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</uuid>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <name>instance-00000098</name>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestServerAdvancedOps-server-1580927856</nova:name>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:17:18</nova:creationTime>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:user uuid="612906aa606e4268918814ea9f47c674">tempest-TestServerAdvancedOps-1902459302-project-member</nova:user>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:project uuid="3c19f20fbaec489eaece7cf904f192fa">tempest-TestServerAdvancedOps-1902459302</nova:project>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <nova:port uuid="3e3120d7-1164-497f-8e95-61789ab8f383">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="serial">aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="uuid">aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:ae:3f:a1"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <target dev="tap3e3120d7-11"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/console.log" append="off"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:17:19 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:17:19 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:17:19 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:17:19 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.210 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Preparing to wait for external event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.210 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.211 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.211 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.212 253542 DEBUG nova.virt.libvirt.vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServ
erAdvancedOps-1902459302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:17:13Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.212 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.213 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.213 253542 DEBUG os_vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.214 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.215 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.219 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.219 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:19 np0005534516 NetworkManager[48915]: <info>  [1764062239.2217] manager: (tap3e3120d7-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.228 253542 INFO os_vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.269 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No VIF found with MAC fa:16:3e:ae:3f:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Using config drive#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.289 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.981 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating config drive at /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config#033[00m
Nov 25 04:17:19 np0005534516 nova_compute[253538]: 2025-11-25 09:17:19.990 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44t3ms5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.135 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44t3ms5s" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.160 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.164 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.196 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updated VIF entry in instance network info cache for port 3e3120d7-1164-497f-8e95-61789ab8f383. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.198 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.211 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.303 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.304 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deleting local config drive /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config because it was imported into RBD.#033[00m
Nov 25 04:17:20 np0005534516 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:20 np0005534516 NetworkManager[48915]: <info>  [1764062240.3520] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 25 04:17:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:20Z|01613|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 04:17:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:20Z|01614|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 04:17:20 np0005534516 systemd-udevd[418590]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:20 np0005534516 systemd-machined[215790]: New machine qemu-182-instance-00000098.
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:20Z|01615|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 04:17:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.396 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:20 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:20Z|01616|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 04:17:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.397 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis#033[00m
Nov 25 04:17:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.398 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:20 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74141c63-5a0a-452a-b484-2dbd390d9dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:20 np0005534516 NetworkManager[48915]: <info>  [1764062240.4008] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:17:20 np0005534516 NetworkManager[48915]: <info>  [1764062240.4031] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:17:20 np0005534516 systemd[1]: Started Virtual Machine qemu-182-instance-00000098.
Nov 25 04:17:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.897 253542 DEBUG nova.compute.manager [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:20 np0005534516 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG nova.compute.manager [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Processing event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.170 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.172 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.1712468, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.172 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.176 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.179 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance spawned successfully.#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.180 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.190 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.194 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.202 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.203 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.205 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.213 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.213 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.171398, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.214 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:17:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:21.214 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:21 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:21.216 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.232 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.236 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.1749299, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.237 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.251 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.257 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.321 253542 INFO nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 7.34 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.322 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.390 253542 INFO nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 8.18 seconds to build instance.#033[00m
Nov 25 04:17:21 np0005534516 nova_compute[253538]: 2025-11-25 09:17:21.423 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:21 np0005534516 podman[418642]: 2025-11-25 09:17:21.84659469 +0000 UTC m=+0.094664845 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:17:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.970 253542 DEBUG nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.971 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:22 np0005534516 nova_compute[253538]: 2025-11-25 09:17:22.973 253542 WARNING nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9fe04653-f5f5-429d-b5fc-40ee7d92267f does not exist
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev adc6a3b2-ca8c-48ac-a54c-454f3c202900 does not exist
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fa258db8-0a70-4fdb-a727-a7abd4c91a1b does not exist
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.468 253542 DEBUG nova.objects.instance [None req-f51c54d3-bc18-4760-951b-0325ee83221a 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.495 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062243.4955335, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.496 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.511 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.516 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:23 np0005534516 nova_compute[253538]: 2025-11-25 09:17:23.534 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 04:17:23 np0005534516 podman[418939]: 2025-11-25 09:17:23.844564733 +0000 UTC m=+0.024828677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.032226544 +0000 UTC m=+0.212490438 container create a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:17:24 np0005534516 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 04:17:24 np0005534516 NetworkManager[48915]: <info>  [1764062244.0946] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:17:24 np0005534516 systemd[1]: Started libpod-conmon-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope.
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:24 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:24Z|01617|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 04:17:24 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:24Z|01618|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 04:17:24 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:24Z|01619|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.116 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.117 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis#033[00m
Nov 25 04:17:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.118 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de11c323-83a9-47ee-84cf-6674a91ece0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:24 np0005534516 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 04:17:24 np0005534516 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000098.scope: Consumed 3.149s CPU time.
Nov 25 04:17:24 np0005534516 systemd-machined[215790]: Machine qemu-182-instance-00000098 terminated.
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.167571243 +0000 UTC m=+0.347835167 container init a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.17776894 +0000 UTC m=+0.358032844 container start a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:17:24 np0005534516 relaxed_poincare[418957]: 167 167
Nov 25 04:17:24 np0005534516 systemd[1]: libpod-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope: Deactivated successfully.
Nov 25 04:17:24 np0005534516 conmon[418957]: conmon a80d33063619818ac872 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope/container/memory.events
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.185573482 +0000 UTC m=+0.365837386 container attach a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.188872892 +0000 UTC m=+0.369136826 container died a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.194 253542 DEBUG nova.compute.manager [None req-f51c54d3-bc18-4760-951b-0325ee83221a 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:17:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.218 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8353aed3f565c9cd61cf066823dce0624fe40c0231ac6523aeec51ffd4530755-merged.mount: Deactivated successfully.
Nov 25 04:17:24 np0005534516 podman[418939]: 2025-11-25 09:17:24.255104622 +0000 UTC m=+0.435368526 container remove a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:17:24 np0005534516 systemd[1]: libpod-conmon-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope: Deactivated successfully.
Nov 25 04:17:24 np0005534516 podman[418998]: 2025-11-25 09:17:24.449168018 +0000 UTC m=+0.058840550 container create ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:17:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 04:17:24 np0005534516 systemd[1]: Started libpod-conmon-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope.
Nov 25 04:17:24 np0005534516 podman[418998]: 2025-11-25 09:17:24.416815758 +0000 UTC m=+0.026488320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:24 np0005534516 podman[418998]: 2025-11-25 09:17:24.556158666 +0000 UTC m=+0.165831228 container init ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:17:24 np0005534516 podman[418998]: 2025-11-25 09:17:24.565245593 +0000 UTC m=+0.174918115 container start ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:17:24 np0005534516 podman[418998]: 2025-11-25 09:17:24.572697236 +0000 UTC m=+0.182369768 container attach ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:17:24 np0005534516 nova_compute[253538]: 2025-11-25 09:17:24.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.062 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 WARNING nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:17:25 np0005534516 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 WARNING nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 04:17:25 np0005534516 pedantic_pasteur[419015]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:17:25 np0005534516 pedantic_pasteur[419015]: --> relative data size: 1.0
Nov 25 04:17:25 np0005534516 pedantic_pasteur[419015]: --> All data devices are unavailable
Nov 25 04:17:25 np0005534516 systemd[1]: libpod-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Deactivated successfully.
Nov 25 04:17:25 np0005534516 podman[418998]: 2025-11-25 09:17:25.748292253 +0000 UTC m=+1.357964795 container died ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:17:25 np0005534516 systemd[1]: libpod-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Consumed 1.134s CPU time.
Nov 25 04:17:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52-merged.mount: Deactivated successfully.
Nov 25 04:17:25 np0005534516 podman[418998]: 2025-11-25 09:17:25.820084395 +0000 UTC m=+1.429756927 container remove ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:17:25 np0005534516 systemd[1]: libpod-conmon-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Deactivated successfully.
Nov 25 04:17:26 np0005534516 nova_compute[253538]: 2025-11-25 09:17:26.208 253542 INFO nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Resuming
Nov 25 04:17:26 np0005534516 nova_compute[253538]: 2025-11-25 09:17:26.209 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'flavor' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:17:26 np0005534516 nova_compute[253538]: 2025-11-25 09:17:26.242 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:17:26 np0005534516 nova_compute[253538]: 2025-11-25 09:17:26.243 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:17:26 np0005534516 nova_compute[253538]: 2025-11-25 09:17:26.243 253542 DEBUG nova.network.neutron [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.405849678 +0000 UTC m=+0.039623008 container create 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 04:17:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.388052275 +0000 UTC m=+0.021825635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:26 np0005534516 systemd[1]: Started libpod-conmon-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope.
Nov 25 04:17:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.536199992 +0000 UTC m=+0.169973352 container init 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.542698248 +0000 UTC m=+0.176471578 container start 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:26 np0005534516 pensive_wu[419214]: 167 167
Nov 25 04:17:26 np0005534516 systemd[1]: libpod-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope: Deactivated successfully.
Nov 25 04:17:26 np0005534516 conmon[419214]: conmon 62ba897be086476e45e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope/container/memory.events
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.564364878 +0000 UTC m=+0.198138208 container attach 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.564738658 +0000 UTC m=+0.198511988 container died 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:17:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-dd53542dcc6e8faab4f858cc99c3271a9b8a96385c9467bf43c312705353b276-merged.mount: Deactivated successfully.
Nov 25 04:17:26 np0005534516 podman[419196]: 2025-11-25 09:17:26.660651305 +0000 UTC m=+0.294424635 container remove 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:26 np0005534516 systemd[1]: libpod-conmon-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope: Deactivated successfully.
Nov 25 04:17:26 np0005534516 podman[419240]: 2025-11-25 09:17:26.872789362 +0000 UTC m=+0.067384643 container create e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:17:26 np0005534516 systemd[1]: Started libpod-conmon-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope.
Nov 25 04:17:26 np0005534516 podman[419240]: 2025-11-25 09:17:26.849302653 +0000 UTC m=+0.043897974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:26 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:26 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:26 np0005534516 podman[419240]: 2025-11-25 09:17:26.972093031 +0000 UTC m=+0.166688322 container init e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:26 np0005534516 podman[419240]: 2025-11-25 09:17:26.980722555 +0000 UTC m=+0.175317836 container start e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:17:26 np0005534516 podman[419240]: 2025-11-25 09:17:26.984229251 +0000 UTC m=+0.178824572 container attach e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:17:27 np0005534516 zen_pascal[419256]: {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    "0": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "devices": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "/dev/loop3"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            ],
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_name": "ceph_lv0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_size": "21470642176",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "name": "ceph_lv0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "tags": {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_name": "ceph",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.crush_device_class": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.encrypted": "0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_id": "0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.vdo": "0"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            },
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "vg_name": "ceph_vg0"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        }
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    ],
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    "1": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "devices": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "/dev/loop4"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            ],
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_name": "ceph_lv1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_size": "21470642176",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "name": "ceph_lv1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "tags": {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_name": "ceph",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.crush_device_class": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.encrypted": "0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_id": "1",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.vdo": "0"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            },
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "vg_name": "ceph_vg1"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        }
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    ],
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    "2": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "devices": [
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "/dev/loop5"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            ],
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_name": "ceph_lv2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_size": "21470642176",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "name": "ceph_lv2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "tags": {
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.cluster_name": "ceph",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.crush_device_class": "",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.encrypted": "0",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osd_id": "2",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:                "ceph.vdo": "0"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            },
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "type": "block",
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:            "vg_name": "ceph_vg2"
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:        }
Nov 25 04:17:27 np0005534516 zen_pascal[419256]:    ]
Nov 25 04:17:27 np0005534516 zen_pascal[419256]: }
Nov 25 04:17:27 np0005534516 systemd[1]: libpod-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope: Deactivated successfully.
Nov 25 04:17:27 np0005534516 podman[419240]: 2025-11-25 09:17:27.763761002 +0000 UTC m=+0.958356293 container died e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:17:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566-merged.mount: Deactivated successfully.
Nov 25 04:17:27 np0005534516 podman[419240]: 2025-11-25 09:17:27.815825727 +0000 UTC m=+1.010420998 container remove e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:17:27 np0005534516 systemd[1]: libpod-conmon-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope: Deactivated successfully.
Nov 25 04:17:27 np0005534516 nova_compute[253538]: 2025-11-25 09:17:27.986 253542 DEBUG nova.network.neutron [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.003 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.008 253542 DEBUG nova.virt.libvirt.vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:24Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.009 253542 DEBUG nova.network.os_vif_util [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.010 253542 DEBUG nova.network.os_vif_util [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.011 253542 DEBUG os_vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.012 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.013 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.017 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.017 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.018 253542 INFO os_vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.037 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'numa_topology' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:28 np0005534516 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 04:17:28 np0005534516 NetworkManager[48915]: <info>  [1764062248.1018] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Nov 25 04:17:28 np0005534516 systemd-udevd[419388]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:28Z|01620|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 04:17:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:28Z|01621|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 04:17:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.152 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis#033[00m
Nov 25 04:17:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.154 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.155 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb7dba4-e9fb-4682-9aaa-2dff32beb969]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:28 np0005534516 NetworkManager[48915]: <info>  [1764062248.1582] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:17:28 np0005534516 NetworkManager[48915]: <info>  [1764062248.1593] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:28Z|01622|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 04:17:28 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:28Z|01623|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:28 np0005534516 systemd-machined[215790]: New machine qemu-183-instance-00000098.
Nov 25 04:17:28 np0005534516 systemd[1]: Started Virtual Machine qemu-183-instance-00000098.
Nov 25 04:17:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.411006357 +0000 UTC m=+0.042398994 container create 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.424 253542 DEBUG nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.427 253542 DEBUG nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:28 np0005534516 nova_compute[253538]: 2025-11-25 09:17:28.427 253542 WARNING nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 04:17:28 np0005534516 systemd[1]: Started libpod-conmon-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope.
Nov 25 04:17:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.392620776 +0000 UTC m=+0.024013413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.525351405 +0000 UTC m=+0.156744072 container init 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.533261109 +0000 UTC m=+0.164653746 container start 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.536053295 +0000 UTC m=+0.167445962 container attach 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:17:28 np0005534516 nervous_benz[419456]: 167 167
Nov 25 04:17:28 np0005534516 systemd[1]: libpod-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope: Deactivated successfully.
Nov 25 04:17:28 np0005534516 conmon[419456]: conmon 6eb705af89601b204126 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope/container/memory.events
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.541996878 +0000 UTC m=+0.173389515 container died 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:17:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-790e3f136d75c6efab54f2ba978389ab68ce4e5019555bb3c78725fc6adcaf20-merged.mount: Deactivated successfully.
Nov 25 04:17:28 np0005534516 podman[419440]: 2025-11-25 09:17:28.585316065 +0000 UTC m=+0.216708712 container remove 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:17:28 np0005534516 systemd[1]: libpod-conmon-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope: Deactivated successfully.
Nov 25 04:17:28 np0005534516 podman[419479]: 2025-11-25 09:17:28.76611272 +0000 UTC m=+0.046053933 container create d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:17:28 np0005534516 systemd[1]: Started libpod-conmon-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope.
Nov 25 04:17:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:17:28 np0005534516 podman[419479]: 2025-11-25 09:17:28.746569819 +0000 UTC m=+0.026511072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:17:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:17:28 np0005534516 podman[419479]: 2025-11-25 09:17:28.886597655 +0000 UTC m=+0.166538928 container init d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 04:17:28 np0005534516 podman[419479]: 2025-11-25 09:17:28.90111555 +0000 UTC m=+0.181056763 container start d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:17:28 np0005534516 podman[419479]: 2025-11-25 09:17:28.911909823 +0000 UTC m=+0.191851056 container attach d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:17:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:17:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:17:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:17:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for aefbd3e8-a8ba-4fef-a771-4e2b5091a90a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062249.3940399, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.422 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.426 253542 DEBUG nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.427 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.432 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.447 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance running successfully.#033[00m
Nov 25 04:17:29 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.450 253542 DEBUG nova.virt.libvirt.guest [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.451 253542 DEBUG nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062249.3987281, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.476 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.480 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.499 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 04:17:29 np0005534516 nova_compute[253538]: 2025-11-25 09:17:29.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]: {
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_id": 1,
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "type": "bluestore"
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    },
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_id": 2,
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "type": "bluestore"
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    },
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_id": 0,
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:        "type": "bluestore"
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]:    }
Nov 25 04:17:29 np0005534516 wizardly_herschel[419493]: }
Nov 25 04:17:29 np0005534516 systemd[1]: libpod-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope: Deactivated successfully.
Nov 25 04:17:29 np0005534516 podman[419479]: 2025-11-25 09:17:29.876874015 +0000 UTC m=+1.156815248 container died d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:17:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3-merged.mount: Deactivated successfully.
Nov 25 04:17:30 np0005534516 podman[419479]: 2025-11-25 09:17:30.301709974 +0000 UTC m=+1.581651207 container remove d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:17:30 np0005534516 systemd[1]: libpod-conmon-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope: Deactivated successfully.
Nov 25 04:17:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:17:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:17:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.505 253542 DEBUG nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.505 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:30 np0005534516 nova_compute[253538]: 2025-11-25 09:17:30.507 253542 WARNING nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:17:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 58c032ee-56e6-4f3b-908e-c060173d47d1 does not exist
Nov 25 04:17:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 50ae1dd6-f300-465d-af01-11532dbf1071 does not exist
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.072 253542 DEBUG nova.objects.instance [None req-97bf4ae3-08d4-4197-b698-7b420920c92f 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.089 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062251.0891693, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.090 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.110 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.113 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 04:17:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:17:31 np0005534516 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 04:17:31 np0005534516 NetworkManager[48915]: <info>  [1764062251.5740] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:31 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:31Z|01624|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 04:17:31 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:31Z|01625|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 04:17:31 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:31Z|01626|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.599 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.600 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis#033[00m
Nov 25 04:17:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.600 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:31 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3441ac7-60ed-4837-a4b9-9ba9ca2edbbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:31 np0005534516 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 04:17:31 np0005534516 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Consumed 2.853s CPU time.
Nov 25 04:17:31 np0005534516 systemd-machined[215790]: Machine qemu-183-instance-00000098 terminated.
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:31 np0005534516 nova_compute[253538]: 2025-11-25 09:17:31.785 253542 DEBUG nova.compute.manager [None req-97bf4ae3-08d4-4197-b698-7b420920c92f 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.570 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 WARNING nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:32 np0005534516 nova_compute[253538]: 2025-11-25 09:17:32.574 253542 WARNING nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 04:17:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:33 np0005534516 nova_compute[253538]: 2025-11-25 09:17:33.628 253542 INFO nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Resuming#033[00m
Nov 25 04:17:33 np0005534516 nova_compute[253538]: 2025-11-25 09:17:33.629 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'flavor' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:33 np0005534516 nova_compute[253538]: 2025-11-25 09:17:33.666 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:17:33 np0005534516 nova_compute[253538]: 2025-11-25 09:17:33.667 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:17:33 np0005534516 nova_compute[253538]: 2025-11-25 09:17:33.667 253542 DEBUG nova.network.neutron [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:17:34 np0005534516 nova_compute[253538]: 2025-11-25 09:17:34.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 04:17:34 np0005534516 nova_compute[253538]: 2025-11-25 09:17:34.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.197 253542 DEBUG nova.network.neutron [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.209 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.216 253542 DEBUG nova.virt.libvirt.vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:31Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.217 253542 DEBUG nova.network.os_vif_util [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.218 253542 DEBUG nova.network.os_vif_util [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.219 253542 DEBUG os_vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.221 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.226 253542 INFO os_vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.248 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'numa_topology' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:35 np0005534516 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 04:17:35 np0005534516 NetworkManager[48915]: <info>  [1764062255.3322] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Nov 25 04:17:35 np0005534516 systemd-udevd[419663]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:17:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:35Z|01627|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:35Z|01628|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 04:17:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.376 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '7', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.378 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis#033[00m
Nov 25 04:17:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.378 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:35 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d292a742-7814-402a-b77a-538538b5016b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 NetworkManager[48915]: <info>  [1764062255.3829] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:17:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:35Z|01629|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 04:17:35 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:35Z|01630|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 NetworkManager[48915]: <info>  [1764062255.3845] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:35 np0005534516 systemd-machined[215790]: New machine qemu-184-instance-00000098.
Nov 25 04:17:35 np0005534516 systemd[1]: Started Virtual Machine qemu-184-instance-00000098.
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.979 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for aefbd3e8-a8ba-4fef-a771-4e2b5091a90a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.980 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062255.9792745, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.980 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.998 253542 DEBUG nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:17:35 np0005534516 nova_compute[253538]: 2025-11-25 09:17:35.999 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.018 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance running successfully.#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.020 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 04:17:36 np0005534516 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.021 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062255.9827847, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.023 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.027 253542 DEBUG nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.028 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.028 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 DEBUG nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 WARNING nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.031 253542 DEBUG nova.virt.libvirt.guest [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.031 253542 DEBUG nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.038 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.041 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:17:36 np0005534516 nova_compute[253538]: 2025-11-25 09:17:36.060 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 04:17:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 85 B/s wr, 56 op/s
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.099 253542 DEBUG nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.101 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.101 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.102 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.103 253542 DEBUG nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.104 253542 WARNING nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.284 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.285 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.285 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.286 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.286 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.287 253542 INFO nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Terminating instance#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.288 253542 DEBUG nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:17:38 np0005534516 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 04:17:38 np0005534516 NetworkManager[48915]: <info>  [1764062258.3802] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:17:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:38Z|01631|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 04:17:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:38Z|01632|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 04:17:38 np0005534516 ovn_controller[152859]: 2025-11-25T09:17:38Z|01633|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.440 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '8', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:17:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.441 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis#033[00m
Nov 25 04:17:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.441 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 04:17:38 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57b557a9-167a-49d7-8903-fe439d6bdcd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:17:38 np0005534516 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 04:17:38 np0005534516 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Consumed 2.832s CPU time.
Nov 25 04:17:38 np0005534516 systemd-machined[215790]: Machine qemu-184-instance-00000098 terminated.
Nov 25 04:17:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 23 op/s
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.525 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance destroyed successfully.#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.526 253542 DEBUG nova.objects.instance [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'resources' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.540 253542 DEBUG nova.virt.libvirt.vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:36Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.541 253542 DEBUG nova.network.os_vif_util [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.542 253542 DEBUG nova.network.os_vif_util [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.542 253542 DEBUG os_vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.544 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e3120d7-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:17:38 np0005534516 nova_compute[253538]: 2025-11-25 09:17:38.604 253542 INFO os_vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.005 253542 INFO nova.virt.libvirt.driver [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deleting instance files /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_del#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.006 253542 INFO nova.virt.libvirt.driver [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deletion of /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_del complete#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.063 253542 INFO nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.064 253542 DEBUG oslo.service.loopingcall [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.064 253542 DEBUG nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.065 253542 DEBUG nova.network.neutron [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.904 253542 DEBUG nova.network.neutron [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.922 253542 INFO nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 0.86 seconds to deallocate network for instance.#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.960 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:39 np0005534516 nova_compute[253538]: 2025-11-25 09:17:39.961 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.002 253542 DEBUG nova.compute.manager [req-c63503c5-9ec3-4b24-82b2-6be1d0b14018 req-9ff7e240-6ef3-4877-8775-09745c6d2b7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-deleted-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.011 253542 DEBUG oslo_concurrency.processutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.174 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 WARNING nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 WARNING nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:17:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:17:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131093284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.478 253542 DEBUG oslo_concurrency.processutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.483 253542 DEBUG nova.compute.provider_tree [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:17:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.501 253542 DEBUG nova.scheduler.client.report [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.521 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.557 253542 INFO nova.scheduler.client.report [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Deleted allocations for instance aefbd3e8-a8ba-4fef-a771-4e2b5091a90a#033[00m
Nov 25 04:17:40 np0005534516 nova_compute[253538]: 2025-11-25 09:17:40.648 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.101 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:17:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:17:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 04:17:42 np0005534516 nova_compute[253538]: 2025-11-25 09:17:42.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:43 np0005534516 nova_compute[253538]: 2025-11-25 09:17:43.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 04:17:44 np0005534516 podman[419775]: 2025-11-25 09:17:44.820438901 +0000 UTC m=+0.065164862 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:17:44 np0005534516 podman[419776]: 2025-11-25 09:17:44.843062077 +0000 UTC m=+0.088796535 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:17:44 np0005534516 nova_compute[253538]: 2025-11-25 09:17:44.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 04:17:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 04:17:48 np0005534516 nova_compute[253538]: 2025-11-25 09:17:48.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:49 np0005534516 nova_compute[253538]: 2025-11-25 09:17:49.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 04:17:51 np0005534516 nova_compute[253538]: 2025-11-25 09:17:51.444 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:17:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 04:17:52 np0005534516 podman[419816]: 2025-11-25 09:17:52.800301427 +0000 UTC m=+0.088403445 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:17:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:17:53
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups']
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:17:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:17:53 np0005534516 nova_compute[253538]: 2025-11-25 09:17:53.524 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062258.5227954, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:17:53 np0005534516 nova_compute[253538]: 2025-11-25 09:17:53.524 253542 INFO nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:17:53 np0005534516 nova_compute[253538]: 2025-11-25 09:17:53.540 253542 DEBUG nova.compute.manager [None req-007dd71e-a4cc-4d83-9ff1-e2923502894f - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:17:53 np0005534516 nova_compute[253538]: 2025-11-25 09:17:53.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:17:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Nov 25 04:17:54 np0005534516 nova_compute[253538]: 2025-11-25 09:17:54.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:55 np0005534516 nova_compute[253538]: 2025-11-25 09:17:55.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:17:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:17:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:17:58 np0005534516 nova_compute[253538]: 2025-11-25 09:17:58.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:17:59 np0005534516 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:17:59 np0005534516 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:17:59 np0005534516 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:17:59 np0005534516 nova_compute[253538]: 2025-11-25 09:17:59.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:17:59 np0005534516 nova_compute[253538]: 2025-11-25 09:17:59.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:00 np0005534516 nova_compute[253538]: 2025-11-25 09:18:00.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:03 np0005534516 nova_compute[253538]: 2025-11-25 09:18:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:03 np0005534516 nova_compute[253538]: 2025-11-25 09:18:03.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.841855) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283841893, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3440247, "memory_usage": 3503488, "flush_reason": "Manual Compaction"}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283858642, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3373649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59074, "largest_seqno": 61130, "table_properties": {"data_size": 3364202, "index_size": 6004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18904, "raw_average_key_size": 20, "raw_value_size": 3345573, "raw_average_value_size": 3566, "num_data_blocks": 266, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062055, "oldest_key_time": 1764062055, "file_creation_time": 1764062283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 16843 microseconds, and 8983 cpu microseconds.
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.858691) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3373649 bytes OK
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.858717) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860140) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860154) EVENT_LOG_v1 {"time_micros": 1764062283860149, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860177) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3431621, prev total WAL file size 3431621, number of live WAL files 2.
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.861144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3294KB)], [140(8154KB)]
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283861181, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11723768, "oldest_snapshot_seqno": -1}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8113 keys, 10004109 bytes, temperature: kUnknown
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283914181, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10004109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9952552, "index_size": 30196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 211769, "raw_average_key_size": 26, "raw_value_size": 9810436, "raw_average_value_size": 1209, "num_data_blocks": 1174, "num_entries": 8113, "num_filter_entries": 8113, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.914542) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10004109 bytes
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.915927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.8 rd, 188.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8627, records dropped: 514 output_compression: NoCompression
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.915950) EVENT_LOG_v1 {"time_micros": 1764062283915938, "job": 86, "event": "compaction_finished", "compaction_time_micros": 53106, "compaction_time_cpu_micros": 27441, "output_level": 6, "num_output_files": 1, "total_output_size": 10004109, "num_input_records": 8627, "num_output_records": 8113, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283916916, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283919058, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.861040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:18:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:18:04 np0005534516 nova_compute[253538]: 2025-11-25 09:18:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:04 np0005534516 nova_compute[253538]: 2025-11-25 09:18:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:18:04 np0005534516 nova_compute[253538]: 2025-11-25 09:18:04.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:05 np0005534516 nova_compute[253538]: 2025-11-25 09:18:05.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:18:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.0 total, 600.0 interval
Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1362 writes, 6172 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     33.0      2.26              0.27        43    0.053       0      0       0.0       0.0
  L6      1/0    9.54 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5     62.6     52.5      6.42              1.08        42    0.153    265K    22K       0.0       0.0
 Sum      1/0    9.54 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     46.3     47.4      8.69              1.34        85    0.102    265K    22K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     64.0     65.7      0.83              0.16        10    0.083     41K   2528       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     62.6     52.5      6.42              1.08        42    0.153    265K    22K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     33.0      2.26              0.27        42    0.054       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 5400.0 total, 600.0 interval
Flush(GB): cumulative 0.073, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.40 GB write, 0.08 MB/s write, 0.39 GB read, 0.07 MB/s read, 8.7 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 47.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000535 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3090,45.65 MB,15.018%) FilterBlock(86,742.80 KB,0.238614%) IndexBlock(86,1.16 MB,0.382172%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 04:18:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:08 np0005534516 nova_compute[253538]: 2025-11-25 09:18:08.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:09 np0005534516 nova_compute[253538]: 2025-11-25 09:18:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:09 np0005534516 nova_compute[253538]: 2025-11-25 09:18:09.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:13 np0005534516 nova_compute[253538]: 2025-11-25 09:18:13.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:14 np0005534516 nova_compute[253538]: 2025-11-25 09:18:14.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:18:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768762082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.086 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:15 np0005534516 podman[419872]: 2025-11-25 09:18:15.212087029 +0000 UTC m=+0.068539774 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 04:18:15 np0005534516 podman[419871]: 2025-11-25 09:18:15.221070643 +0000 UTC m=+0.082123564 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.289 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.425 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.426 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.519 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.537 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.538 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.599 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.616 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.648 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.954 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.955 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:15 np0005534516 nova_compute[253538]: 2025-11-25 09:18:15.969 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.038 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:18:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664273862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.113 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.119 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.132 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.180 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.181 253542 INFO nova.compute.claims [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.440 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:18:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:18:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569016593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.882 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.888 253542 DEBUG nova.compute.provider_tree [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.900 253542 DEBUG nova.scheduler.client.report [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.950 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.951 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.997 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:18:16 np0005534516 nova_compute[253538]: 2025-11-25 09:18:16.997 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.017 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.044 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.164 253542 DEBUG nova.policy [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a68fbd2f756d42aa982630f3a41f0a1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41c67820b40a4185a60c4245f9c43ef5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.227 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.228 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.229 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating image(s)#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.257 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.283 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.309 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.313 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.385 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.386 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.387 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.388 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.416 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.420 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:17 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.795 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.869 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] resizing rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.966 253542 DEBUG nova.objects.instance [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'migration_context' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.978 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ensure instance console log exists: /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:17 np0005534516 nova_compute[253538]: 2025-11-25 09:18:17.980 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 103 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 846 KiB/s wr, 12 op/s
Nov 25 04:18:18 np0005534516 nova_compute[253538]: 2025-11-25 09:18:18.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:18 np0005534516 nova_compute[253538]: 2025-11-25 09:18:18.895 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Successfully created port: 9d78d6ba-3489-4cfd-ae33-9166be3f940c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:18:19 np0005534516 nova_compute[253538]: 2025-11-25 09:18:19.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1018 KiB/s wr, 26 op/s
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.516 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Successfully updated port: 9d78d6ba-3489-4cfd-ae33-9166be3f940c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG nova.compute.manager [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG nova.compute.manager [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.520 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.535 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:18:20 np0005534516 nova_compute[253538]: 2025-11-25 09:18:20.715 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.034 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.179 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.949 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.966 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.967 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance network_info: |[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.971 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start _get_guest_xml network_info=[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.975 253542 WARNING nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.980 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.981 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.983 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:18:21 np0005534516 nova_compute[253538]: 2025-11-25 09:18:21.990 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:18:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250252886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.441 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.467 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.471 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:18:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:18:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867417623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.936 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.938 253542 DEBUG nova.virt.libvirt.vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:18:17Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.939 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.941 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.943 253542 DEBUG nova.objects.instance [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.956 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <uuid>282b7217-4c1e-4a42-b3da-05616f4e1da3</uuid>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <name>instance-00000099</name>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestShelveInstance-server-853165821</nova:name>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:18:21</nova:creationTime>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:user uuid="a68fbd2f756d42aa982630f3a41f0a1f">tempest-TestShelveInstance-1867415308-project-member</nova:user>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:project uuid="41c67820b40a4185a60c4245f9c43ef5">tempest-TestShelveInstance-1867415308</nova:project>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <nova:port uuid="9d78d6ba-3489-4cfd-ae33-9166be3f940c">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="serial">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="uuid">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:eb:22:67"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <target dev="tap9d78d6ba-34"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log" append="off"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:18:22 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:18:22 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:18:22 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:18:22 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Preparing to wait for external event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.959 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.959 253542 DEBUG nova.virt.libvirt.vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:18:17Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.960 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.961 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.961 253542 DEBUG os_vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.962 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.962 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.963 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78d6ba-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:22 np0005534516 nova_compute[253538]: 2025-11-25 09:18:22.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78d6ba-34, col_values=(('external_ids', {'iface-id': '9d78d6ba-3489-4cfd-ae33-9166be3f940c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:22:67', 'vm-uuid': '282b7217-4c1e-4a42-b3da-05616f4e1da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:23 np0005534516 NetworkManager[48915]: <info>  [1764062303.0008] manager: (tap9d78d6ba-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.010 253542 INFO os_vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.054 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No VIF found with MAC fa:16:3e:eb:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Using config drive#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.078 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.454 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating config drive at /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.460 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p660v3i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.602 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p660v3i" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.628 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.632 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.806 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.807 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting local config drive /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config because it was imported into RBD.#033[00m
Nov 25 04:18:23 np0005534516 podman[420234]: 2025-11-25 09:18:23.836788233 +0000 UTC m=+0.092063474 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:18:23 np0005534516 kernel: tap9d78d6ba-34: entered promiscuous mode
Nov 25 04:18:23 np0005534516 NetworkManager[48915]: <info>  [1764062303.8673] manager: (tap9d78d6ba-34): new Tun device (/org/freedesktop/NetworkManager/Devices/669)
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:23Z|01634|binding|INFO|Claiming lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c for this chassis.
Nov 25 04:18:23 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:23Z|01635|binding|INFO|9d78d6ba-3489-4cfd-ae33-9166be3f940c: Claiming fa:16:3e:eb:22:67 10.100.0.14
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.892 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.894 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 bound to our chassis#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.895 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d70c6d-e66b-4570-a7d7-11486a935ed8#033[00m
Nov 25 04:18:23 np0005534516 systemd-udevd[420276]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a07af6-2e05-425c-ab3c-9e88b3fed90c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.909 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d70c6d-e1 in ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:18:23 np0005534516 systemd-machined[215790]: New machine qemu-185-instance-00000099.
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.915 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d70c6d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.915 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93816612-d97a-4617-90a8-cd14637db13d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.916 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63efcb56-200f-4312-aa55-97709d936995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 NetworkManager[48915]: <info>  [1764062303.9215] device (tap9d78d6ba-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:18:23 np0005534516 NetworkManager[48915]: <info>  [1764062303.9231] device (tap9d78d6ba-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.927 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fb76d939-5017-4831-bb48-85bda173dd94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Nov 25 04:18:23 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:23Z|01636|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c ovn-installed in OVS
Nov 25 04:18:23 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:23Z|01637|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c up in Southbound
Nov 25 04:18:23 np0005534516 nova_compute[253538]: 2025-11-25 09:18:23.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb840eb-8528-4620-abd2-7b881f8df821]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.977 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01ba7914-86d3-4719-9d3e-9fdd20a495c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.983 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[962ca2ad-d3c5-4b12-8c65-baadf53b692e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:23 np0005534516 NetworkManager[48915]: <info>  [1764062303.9852] manager: (tap26d70c6d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/670)
Nov 25 04:18:23 np0005534516 systemd-udevd[420280]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.017 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c91d4-9d95-4436-b5a1-b7bccfc86491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.020 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5c207857-2703-4e66-a5bf-226f43a1a3fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 NetworkManager[48915]: <info>  [1764062304.0406] device (tap26d70c6d-e0): carrier: link connected
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.045 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c72198ac-f81a-4673-8c4e-df66f7e340ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efe02358-c03f-4553-ba5e-8e0b25e17d28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765863, 'reachable_time': 33599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420309, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.073 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63277a6f-a0ca-4cee-9dcf-fa279cbf7627]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9093'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765863, 'tstamp': 765863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420310, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.089 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9862a0c-f1f6-4243-9072-d8ed68d0e3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765863, 'reachable_time': 33599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420311, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6956cbb-0a82-4936-b0e6-750305cb8488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.133 253542 DEBUG nova.compute.manager [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG nova.compute.manager [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Processing event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f8ff3a-531f-44f6-9eb0-79e1e0b68bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d70c6d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:24 np0005534516 kernel: tap26d70c6d-e0: entered promiscuous mode
Nov 25 04:18:24 np0005534516 NetworkManager[48915]: <info>  [1764062304.1822] manager: (tap26d70c6d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.182 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d70c6d-e0, col_values=(('external_ids', {'iface-id': '49a3f274-19b1-4763-bafc-281fe099299b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:24 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:24Z|01638|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.198 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.199 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.200 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9070ac40-e0bd-408b-a77f-f96b9d5b120f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.200 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:18:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.201 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'env', 'PROCESS_TAG=haproxy-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d70c6d-e66b-4570-a7d7-11486a935ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:18:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.520 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5197344, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.521 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Started (Lifecycle Event)#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.524 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.528 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.532 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance spawned successfully.#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.533 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.548 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.549 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.549 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.550 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.550 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.551 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5208545, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.585 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.588 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5274475, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.588 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.613 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.616 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.626 253542 INFO nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 7.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.627 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.654 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:18:24 np0005534516 podman[420385]: 2025-11-25 09:18:24.578437664 +0000 UTC m=+0.025467843 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.683 253542 INFO nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 8.67 seconds to build instance.#033[00m
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.699 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:24 np0005534516 podman[420385]: 2025-11-25 09:18:24.70187302 +0000 UTC m=+0.148903169 container create 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:24 np0005534516 systemd[1]: Started libpod-conmon-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope.
Nov 25 04:18:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8869441ad4141e8c78750280aebdef1e5823b5dc10b72133bb9fe07f53bcf62c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:24 np0005534516 podman[420385]: 2025-11-25 09:18:24.853691756 +0000 UTC m=+0.300722015 container init 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 04:18:24 np0005534516 podman[420385]: 2025-11-25 09:18:24.861214111 +0000 UTC m=+0.308244280 container start 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:24 np0005534516 nova_compute[253538]: 2025-11-25 09:18:24.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:24 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : New worker (420406) forked
Nov 25 04:18:24 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : Loading success.
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.202 253542 DEBUG nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:18:26 np0005534516 nova_compute[253538]: 2025-11-25 09:18:26.204 253542 WARNING nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state None.#033[00m
Nov 25 04:18:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 04:18:28 np0005534516 nova_compute[253538]: 2025-11-25 09:18:28.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 25 04:18:28 np0005534516 nova_compute[253538]: 2025-11-25 09:18:28.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:28.929 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:18:28 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:28.929 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:18:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:18:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:18:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:18:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:18:29 np0005534516 nova_compute[253538]: 2025-11-25 09:18:29.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:18:29 np0005534516 NetworkManager[48915]: <info>  [1764062309.3951] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Nov 25 04:18:29 np0005534516 NetworkManager[48915]: <info>  [1764062309.3964] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Nov 25 04:18:29 np0005534516 nova_compute[253538]: 2025-11-25 09:18:29.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:18:29 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:29Z|01639|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 04:18:29 np0005534516 nova_compute[253538]: 2025-11-25 09:18:29.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:18:29 np0005534516 nova_compute[253538]: 2025-11-25 09:18:29.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:18:30 np0005534516 nova_compute[253538]: 2025-11-25 09:18:30.091 253542 DEBUG nova.compute.manager [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:18:30 np0005534516 nova_compute[253538]: 2025-11-25 09:18:30.092 253542 DEBUG nova.compute.manager [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 04:18:30 np0005534516 nova_compute[253538]: 2025-11-25 09:18:30.092 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:18:30 np0005534516 nova_compute[253538]: 2025-11-25 09:18:30.093 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:18:30 np0005534516 nova_compute[253538]: 2025-11-25 09:18:30.093 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 04:18:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 87 op/s
Nov 25 04:18:31 np0005534516 nova_compute[253538]: 2025-11-25 09:18:31.257 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 04:18:31 np0005534516 nova_compute[253538]: 2025-11-25 09:18:31.257 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:18:31 np0005534516 nova_compute[253538]: 2025-11-25 09:18:31.274 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 26b6720f-df57-4801-b599-defd5da1fbf5 does not exist
Nov 25 04:18:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bad9114d-2602-4f8c-b888-adf2608a20f4 does not exist
Nov 25 04:18:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 547d7171-a413-4532-bf56-71882ceabb0b does not exist
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.208721575 +0000 UTC m=+0.043573765 container create 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:18:32 np0005534516 systemd[1]: Started libpod-conmon-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope.
Nov 25 04:18:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.186492122 +0000 UTC m=+0.021344332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.301061285 +0000 UTC m=+0.135913505 container init 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.309381282 +0000 UTC m=+0.144233472 container start 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.314369858 +0000 UTC m=+0.149222048 container attach 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:18:32 np0005534516 jolly_sanderson[420704]: 167 167
Nov 25 04:18:32 np0005534516 systemd[1]: libpod-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope: Deactivated successfully.
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.318835309 +0000 UTC m=+0.153687509 container died 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:18:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ecc77f87cf1e6c11d3fa4be2f6f29dbe70b7d0819029780d5822af48aacbb5c3-merged.mount: Deactivated successfully.
Nov 25 04:18:32 np0005534516 podman[420688]: 2025-11-25 09:18:32.377536285 +0000 UTC m=+0.212388475 container remove 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:18:32 np0005534516 systemd[1]: libpod-conmon-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope: Deactivated successfully.
Nov 25 04:18:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 809 KiB/s wr, 74 op/s
Nov 25 04:18:32 np0005534516 podman[420730]: 2025-11-25 09:18:32.624912959 +0000 UTC m=+0.057893125 container create a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:18:32 np0005534516 systemd[1]: Started libpod-conmon-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope.
Nov 25 04:18:32 np0005534516 podman[420730]: 2025-11-25 09:18:32.596472096 +0000 UTC m=+0.029452282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:32 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:32 np0005534516 podman[420730]: 2025-11-25 09:18:32.726407788 +0000 UTC m=+0.159387954 container init a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:18:32 np0005534516 podman[420730]: 2025-11-25 09:18:32.736252076 +0000 UTC m=+0.169232242 container start a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:18:32 np0005534516 podman[420730]: 2025-11-25 09:18:32.740661716 +0000 UTC m=+0.173641882 container attach a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:18:33 np0005534516 nova_compute[253538]: 2025-11-25 09:18:33.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.455981) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313456013, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 449145, "memory_usage": 460552, "flush_reason": "Manual Compaction"}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313486170, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 444845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61131, "largest_seqno": 61646, "table_properties": {"data_size": 441982, "index_size": 834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6735, "raw_average_key_size": 18, "raw_value_size": 436204, "raw_average_value_size": 1195, "num_data_blocks": 38, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062284, "oldest_key_time": 1764062284, "file_creation_time": 1764062313, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 30236 microseconds, and 2016 cpu microseconds.
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.486213) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 444845 bytes OK
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.486231) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500945) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500960) EVENT_LOG_v1 {"time_micros": 1764062313500955, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 446142, prev total WAL file size 446142, number of live WAL files 2.
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.501395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353139' seq:72057594037927935, type:22 .. '6C6F676D0032373730' seq:0, type:0; will stop at (end)
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(434KB)], [143(9769KB)]
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313501475, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10448954, "oldest_snapshot_seqno": -1}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7958 keys, 10343495 bytes, temperature: kUnknown
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313574995, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10343495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10292010, "index_size": 30508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19909, "raw_key_size": 209502, "raw_average_key_size": 26, "raw_value_size": 10151610, "raw_average_value_size": 1275, "num_data_blocks": 1186, "num_entries": 7958, "num_filter_entries": 7958, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062313, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.575376) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10343495 bytes
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.576867) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 140.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(46.7) write-amplify(23.3) OK, records in: 8478, records dropped: 520 output_compression: NoCompression
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.576896) EVENT_LOG_v1 {"time_micros": 1764062313576883, "job": 88, "event": "compaction_finished", "compaction_time_micros": 73616, "compaction_time_cpu_micros": 28779, "output_level": 6, "num_output_files": 1, "total_output_size": 10343495, "num_input_records": 8478, "num_output_records": 7958, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313577202, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313581310, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.501292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:18:33 np0005534516 condescending_black[420746]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:18:33 np0005534516 condescending_black[420746]: --> relative data size: 1.0
Nov 25 04:18:33 np0005534516 condescending_black[420746]: --> All data devices are unavailable
Nov 25 04:18:33 np0005534516 systemd[1]: libpod-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Deactivated successfully.
Nov 25 04:18:33 np0005534516 podman[420730]: 2025-11-25 09:18:33.910563369 +0000 UTC m=+1.343543535 container died a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:18:33 np0005534516 systemd[1]: libpod-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Consumed 1.110s CPU time.
Nov 25 04:18:33 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7-merged.mount: Deactivated successfully.
Nov 25 04:18:33 np0005534516 podman[420730]: 2025-11-25 09:18:33.964024372 +0000 UTC m=+1.397004538 container remove a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:34 np0005534516 systemd[1]: libpod-conmon-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Deactivated successfully.
Nov 25 04:18:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.633000317 +0000 UTC m=+0.042969189 container create 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:18:34 np0005534516 systemd[1]: Started libpod-conmon-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope.
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.612191021 +0000 UTC m=+0.022159913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:34 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.727929468 +0000 UTC m=+0.137898370 container init 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.737606131 +0000 UTC m=+0.147575003 container start 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.741389324 +0000 UTC m=+0.151358226 container attach 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 04:18:34 np0005534516 musing_gagarin[420941]: 167 167
Nov 25 04:18:34 np0005534516 systemd[1]: libpod-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope: Deactivated successfully.
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.745760133 +0000 UTC m=+0.155729025 container died 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bb0d98d688efe5ce92032b852db357408613bb256a8b904bc395e871d3073634-merged.mount: Deactivated successfully.
Nov 25 04:18:34 np0005534516 podman[420927]: 2025-11-25 09:18:34.790002165 +0000 UTC m=+0.199971037 container remove 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:18:34 np0005534516 systemd[1]: libpod-conmon-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope: Deactivated successfully.
Nov 25 04:18:34 np0005534516 nova_compute[253538]: 2025-11-25 09:18:34.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:34 np0005534516 podman[420965]: 2025-11-25 09:18:34.981357337 +0000 UTC m=+0.041531360 container create b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:18:35 np0005534516 systemd[1]: Started libpod-conmon-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope.
Nov 25 04:18:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:35 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:34.96268093 +0000 UTC m=+0.022854973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:35.071901039 +0000 UTC m=+0.132075112 container init b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:35.080941504 +0000 UTC m=+0.141115527 container start b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:35.118843424 +0000 UTC m=+0.179017447 container attach b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]: {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    "0": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "devices": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "/dev/loop3"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            ],
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_name": "ceph_lv0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_size": "21470642176",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "name": "ceph_lv0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "tags": {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_name": "ceph",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.crush_device_class": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.encrypted": "0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_id": "0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.vdo": "0"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            },
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "vg_name": "ceph_vg0"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        }
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    ],
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    "1": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "devices": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "/dev/loop4"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            ],
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_name": "ceph_lv1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_size": "21470642176",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "name": "ceph_lv1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "tags": {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_name": "ceph",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.crush_device_class": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.encrypted": "0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_id": "1",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.vdo": "0"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            },
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "vg_name": "ceph_vg1"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        }
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    ],
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    "2": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "devices": [
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "/dev/loop5"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            ],
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_name": "ceph_lv2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_size": "21470642176",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "name": "ceph_lv2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "tags": {
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.cluster_name": "ceph",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.crush_device_class": "",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.encrypted": "0",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osd_id": "2",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:                "ceph.vdo": "0"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            },
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "type": "block",
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:            "vg_name": "ceph_vg2"
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:        }
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]:    ]
Nov 25 04:18:35 np0005534516 admiring_antonelli[420982]: }
Nov 25 04:18:35 np0005534516 systemd[1]: libpod-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope: Deactivated successfully.
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:35.867928458 +0000 UTC m=+0.928102501 container died b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:18:35 np0005534516 systemd[1]: var-lib-containers-storage-overlay-86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2-merged.mount: Deactivated successfully.
Nov 25 04:18:35 np0005534516 podman[420965]: 2025-11-25 09:18:35.997686575 +0000 UTC m=+1.057860598 container remove b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:18:36 np0005534516 systemd[1]: libpod-conmon-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope: Deactivated successfully.
Nov 25 04:18:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.695234427 +0000 UTC m=+0.043191155 container create 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:18:36 np0005534516 systemd[1]: Started libpod-conmon-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope.
Nov 25 04:18:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.676468117 +0000 UTC m=+0.024424865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.779656582 +0000 UTC m=+0.127613360 container init 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.791840583 +0000 UTC m=+0.139797311 container start 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.795351419 +0000 UTC m=+0.143308197 container attach 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:18:36 np0005534516 agitated_lalande[421159]: 167 167
Nov 25 04:18:36 np0005534516 systemd[1]: libpod-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope: Deactivated successfully.
Nov 25 04:18:36 np0005534516 conmon[421159]: conmon 17969f5917273aaaa4b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope/container/memory.events
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.799130711 +0000 UTC m=+0.147087439 container died 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:18:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c7054b438cd380079ce27c5f1f1f394c7d3a97c4b4c703a4372490306583bef8-merged.mount: Deactivated successfully.
Nov 25 04:18:36 np0005534516 podman[421143]: 2025-11-25 09:18:36.837117374 +0000 UTC m=+0.185074102 container remove 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:18:36 np0005534516 systemd[1]: libpod-conmon-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope: Deactivated successfully.
Nov 25 04:18:37 np0005534516 podman[421181]: 2025-11-25 09:18:37.037238074 +0000 UTC m=+0.040174573 container create 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:18:37 np0005534516 systemd[1]: Started libpod-conmon-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope.
Nov 25 04:18:37 np0005534516 podman[421181]: 2025-11-25 09:18:37.019592035 +0000 UTC m=+0.022528554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:18:37 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:18:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:37 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:18:37 np0005534516 podman[421181]: 2025-11-25 09:18:37.135107545 +0000 UTC m=+0.138044074 container init 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:18:37 np0005534516 podman[421181]: 2025-11-25 09:18:37.148514349 +0000 UTC m=+0.151450848 container start 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:18:37 np0005534516 podman[421181]: 2025-11-25 09:18:37.153366841 +0000 UTC m=+0.156303350 container attach 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:18:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:37Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:22:67 10.100.0.14
Nov 25 04:18:37 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:37Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:22:67 10.100.0.14
Nov 25 04:18:37 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:37.931 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:38 np0005534516 nova_compute[253538]: 2025-11-25 09:18:38.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]: {
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_id": 1,
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "type": "bluestore"
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    },
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_id": 2,
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "type": "bluestore"
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    },
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_id": 0,
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:        "type": "bluestore"
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]:    }
Nov 25 04:18:38 np0005534516 upbeat_hamilton[421197]: }
Nov 25 04:18:38 np0005534516 systemd[1]: libpod-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Deactivated successfully.
Nov 25 04:18:38 np0005534516 systemd[1]: libpod-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Consumed 1.078s CPU time.
Nov 25 04:18:38 np0005534516 podman[421181]: 2025-11-25 09:18:38.224037866 +0000 UTC m=+1.226974395 container died 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:18:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84-merged.mount: Deactivated successfully.
Nov 25 04:18:38 np0005534516 podman[421181]: 2025-11-25 09:18:38.333644886 +0000 UTC m=+1.336581365 container remove 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:18:38 np0005534516 systemd[1]: libpod-conmon-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Deactivated successfully.
Nov 25 04:18:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:18:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:18:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 40ff5b64-8821-4775-9053-90b2660f5d2c does not exist
Nov 25 04:18:38 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 224a3f87-18bc-4e7c-8b69-bffe0a98030d does not exist
Nov 25 04:18:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 159 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Nov 25 04:18:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:39 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:18:39 np0005534516 nova_compute[253538]: 2025-11-25 09:18:39.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.103 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.103 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.104 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:18:43 np0005534516 nova_compute[253538]: 2025-11-25 09:18:43.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 04:18:44 np0005534516 nova_compute[253538]: 2025-11-25 09:18:44.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:45 np0005534516 podman[421292]: 2025-11-25 09:18:45.842536978 +0000 UTC m=+0.081737363 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 04:18:45 np0005534516 podman[421293]: 2025-11-25 09:18:45.843492174 +0000 UTC m=+0.076577573 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:18:46 np0005534516 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:46 np0005534516 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:46 np0005534516 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 INFO nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shelving#033[00m
Nov 25 04:18:46 np0005534516 nova_compute[253538]: 2025-11-25 09:18:46.426 253542 DEBUG nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 04:18:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 04:18:48 np0005534516 kernel: tap9d78d6ba-34 (unregistering): left promiscuous mode
Nov 25 04:18:48 np0005534516 NetworkManager[48915]: <info>  [1764062328.7551] device (tap9d78d6ba-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:18:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:48Z|01640|binding|INFO|Releasing lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c from this chassis (sb_readonly=0)
Nov 25 04:18:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:48Z|01641|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c down in Southbound
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:18:48Z|01642|binding|INFO|Removing iface tap9d78d6ba-34 ovn-installed in OVS
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.775 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:18:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.778 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 unbound from our chassis#033[00m
Nov 25 04:18:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.780 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d70c6d-e66b-4570-a7d7-11486a935ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:18:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c7701a-c053-47df-b933-8ed93dc2657b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.782 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace which is not needed anymore#033[00m
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:48 np0005534516 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 04:18:48 np0005534516 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 13.933s CPU time.
Nov 25 04:18:48 np0005534516 systemd-machined[215790]: Machine qemu-185-instance-00000099 terminated.
Nov 25 04:18:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : haproxy version is 2.8.14-c23fe91
Nov 25 04:18:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : path to executable is /usr/sbin/haproxy
Nov 25 04:18:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [WARNING]  (420404) : Exiting Master process...
Nov 25 04:18:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [ALERT]    (420404) : Current worker (420406) exited with code 143 (Terminated)
Nov 25 04:18:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [WARNING]  (420404) : All workers exited. Exiting... (0)
Nov 25 04:18:48 np0005534516 systemd[1]: libpod-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope: Deactivated successfully.
Nov 25 04:18:48 np0005534516 podman[421356]: 2025-11-25 09:18:48.936287419 +0000 UTC m=+0.050161265 container died 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:18:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9-userdata-shm.mount: Deactivated successfully.
Nov 25 04:18:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8869441ad4141e8c78750280aebdef1e5823b5dc10b72133bb9fe07f53bcf62c-merged.mount: Deactivated successfully.
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:48 np0005534516 podman[421356]: 2025-11-25 09:18:48.990990546 +0000 UTC m=+0.104864392 container cleanup 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:18:48 np0005534516 nova_compute[253538]: 2025-11-25 09:18:48.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:49 np0005534516 systemd[1]: libpod-conmon-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope: Deactivated successfully.
Nov 25 04:18:49 np0005534516 podman[421394]: 2025-11-25 09:18:49.060813194 +0000 UTC m=+0.045967800 container remove 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.065 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f78b05b8-10ff-48ae-8615-43d081b57e8a]: (4, ('Tue Nov 25 09:18:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9)\n9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9\nTue Nov 25 09:18:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9)\n9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[093dec41-c2c9-405f-ba13-7aa21241b085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.068 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:49 np0005534516 kernel: tap26d70c6d-e0: left promiscuous mode
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82591a0e-2569-4fc1-b251-6a1d73e4da34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b39dbd-d469-4df7-832d-32a657de50e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.103 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fb0ae1-c484-4781-ad5c-840aeaf1922a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 DEBUG nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 WARNING nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state shelving.#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.118 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[503f8958-4d34-4925-a25f-04da05deb551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765857, 'reachable_time': 35317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421415, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.121 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:18:49 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.121 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1af300b6-711c-482a-9019-88dabed4a5f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:18:49 np0005534516 systemd[1]: run-netns-ovnmeta\x2d26d70c6d\x2de66b\x2d4570\x2da7d7\x2d11486a935ed8.mount: Deactivated successfully.
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.445 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.454 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.455 253542 DEBUG nova.objects.instance [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.704 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Beginning cold snapshot process#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.831 253542 DEBUG nova.virt.libvirt.imagebackend [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 04:18:49 np0005534516 nova_compute[253538]: 2025-11-25 09:18:49.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:50 np0005534516 nova_compute[253538]: 2025-11-25 09:18:50.061 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] creating snapshot(31dc4e47f1b5484f859997b028f1b43e) on rbd image(282b7217-4c1e-4a42-b3da-05616f4e1da3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 04:18:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 753 KiB/s wr, 43 op/s
Nov 25 04:18:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 25 04:18:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 25 04:18:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 25 04:18:50 np0005534516 nova_compute[253538]: 2025-11-25 09:18:50.851 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] cloning vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk@31dc4e47f1b5484f859997b028f1b43e to images/d40366df-73ef-47e3-8169-d7bf9c208bbb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 04:18:50 np0005534516 nova_compute[253538]: 2025-11-25 09:18:50.973 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] flattening images/d40366df-73ef-47e3-8169-d7bf9c208bbb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.190 253542 DEBUG nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.191 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.191 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.192 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.192 253542 DEBUG nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.193 253542 WARNING nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.375 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] removing snapshot(31dc4e47f1b5484f859997b028f1b43e) on rbd image(282b7217-4c1e-4a42-b3da-05616f4e1da3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 04:18:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 25 04:18:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 25 04:18:51 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 25 04:18:51 np0005534516 nova_compute[253538]: 2025-11-25 09:18:51.817 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] creating snapshot(snap) on rbd image(d40366df-73ef-47e3-8169-d7bf9c208bbb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 04:18:52 np0005534516 nova_compute[253538]: 2025-11-25 09:18:52.172 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 60 op/s
Nov 25 04:18:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 25 04:18:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 25 04:18:52 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 25 04:18:53 np0005534516 nova_compute[253538]: 2025-11-25 09:18:53.060 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:18:53
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'backups', 'vms', 'images', 'cephfs.cephfs.meta']
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:18:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:18:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.214 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Snapshot image upload complete#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.215 253542 DEBUG nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.255 253542 INFO nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shelve offloading#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.262 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.263 253542 DEBUG nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.266 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.266 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.267 253542 DEBUG nova.network.neutron [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:18:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.6 MiB/s wr, 134 op/s
Nov 25 04:18:54 np0005534516 podman[421559]: 2025-11-25 09:18:54.8401772 +0000 UTC m=+0.091538370 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:18:54 np0005534516 nova_compute[253538]: 2025-11-25 09:18:54.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:55 np0005534516 nova_compute[253538]: 2025-11-25 09:18:55.279 253542 DEBUG nova.network.neutron [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:18:55 np0005534516 nova_compute[253538]: 2025-11-25 09:18:55.346 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:18:55 np0005534516 nova_compute[253538]: 2025-11-25 09:18:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:18:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.319 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.319 253542 DEBUG nova.objects.instance [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'resources' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.331 253542 DEBUG nova.virt.libvirt.vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:18:49Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.331 253542 DEBUG nova.network.os_vif_util [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.332 253542 DEBUG nova.network.os_vif_util [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.333 253542 DEBUG os_vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.335 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.335 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78d6ba-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.342 253542 INFO os_vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.394 253542 DEBUG nova.compute.manager [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.395 253542 DEBUG nova.compute.manager [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.395 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.396 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:18:58 np0005534516 nova_compute[253538]: 2025-11-25 09:18:58.396 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:18:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:18:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 25 04:18:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 25 04:18:58 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 25 04:18:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.273 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting instance files /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.274 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deletion of /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del complete#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.372 253542 INFO nova.scheduler.client.report [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Deleted allocations for instance 282b7217-4c1e-4a42-b3da-05616f4e1da3#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.413 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.414 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.438 253542 DEBUG oslo_concurrency.processutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.680 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.681 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": null, "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.707 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:18:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:18:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621795008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.871 253542 DEBUG oslo_concurrency.processutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.880 253542 DEBUG nova.compute.provider_tree [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.895 253542 DEBUG nova.scheduler.client.report [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.913 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:18:59 np0005534516 nova_compute[253538]: 2025-11-25 09:18:59.954 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 25 04:19:00 np0005534516 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:00 np0005534516 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:19:00 np0005534516 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:19:00 np0005534516 nova_compute[253538]: 2025-11-25 09:19:00.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:19:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.720 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.721 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.721 253542 INFO nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Unshelving#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.831 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.832 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.837 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.854 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.865 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.866 253542 INFO nova.compute.claims [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:19:02 np0005534516 nova_compute[253538]: 2025-11-25 09:19:02.973 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:19:03 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307464301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.419 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.427 253542 DEBUG nova.compute.provider_tree [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.442 253542 DEBUG nova.scheduler.client.report [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.462 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:03 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.585 253542 INFO nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating port 9d78d6ba-3489-4cfd-ae33-9166be3f940c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:03.999 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062328.9989727, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.000 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.017 253542 DEBUG nova.compute.manager [None req-eeb986e4-5a10-407c-8af3-57f704a1782d - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.202 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.203 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.203 253542 DEBUG nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.308 253542 DEBUG nova.compute.manager [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.309 253542 DEBUG nova.compute.manager [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.309 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 761 KiB/s wr, 59 op/s
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770937622094159 of space, bias 1.0, pg target 0.5312812866282477 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:19:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:19:04 np0005534516 nova_compute[253538]: 2025-11-25 09:19:04.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.056 253542 DEBUG nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.167 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.167 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating image(s)#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.194 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.198 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.199 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.200 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.246 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.282 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.286 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "2702c9a3c09b88045258aad233b10ea7fa141e76" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.287 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "2702c9a3c09b88045258aad233b10ea7fa141e76" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.563 253542 DEBUG nova.virt.libvirt.imagebackend [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.626 253542 DEBUG nova.virt.libvirt.imagebackend [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.628 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] cloning images/d40366df-73ef-47e3-8169-d7bf9c208bbb@snap to None/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.742 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "2702c9a3c09b88045258aad233b10ea7fa141e76" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.868 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'migration_context' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:06 np0005534516 nova_compute[253538]: 2025-11-25 09:19:06.917 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] flattening vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.970 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Image rbd:vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.970 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ensure instance console log exists: /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.972 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.973 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start _get_guest_xml network_info=[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:18:46Z,direct_url=<?>,disk_format='raw',id=d40366df-73ef-47e3-8169-d7bf9c208bbb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-853165821-shelved',owner='41c67820b40a4185a60c4245f9c43ef5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.978 253542 WARNING nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.983 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.984 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.987 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:18:46Z,direct_url=<?>,disk_format='raw',id=d40366df-73ef-47e3-8169-d7bf9c208bbb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-853165821-shelved',owner='41c67820b40a4185a60c4245f9c43ef5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.991 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:19:07 np0005534516 nova_compute[253538]: 2025-11-25 09:19:07.991 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.004 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.308 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.309 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.335 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.340 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:19:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466850822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:19:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.484 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.503 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.506 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 04:19:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:19:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089369585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.935 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.937 253542 DEBUG nova.virt.libvirt.vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='d40366df-73ef-47e3-8169-d7bf9c208bbb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_h
w_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:19:02Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.937 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.938 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.939 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.963 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <uuid>282b7217-4c1e-4a42-b3da-05616f4e1da3</uuid>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <name>instance-00000099</name>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:name>tempest-TestShelveInstance-server-853165821</nova:name>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:19:07</nova:creationTime>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:user uuid="a68fbd2f756d42aa982630f3a41f0a1f">tempest-TestShelveInstance-1867415308-project-member</nova:user>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:project uuid="41c67820b40a4185a60c4245f9c43ef5">tempest-TestShelveInstance-1867415308</nova:project>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="d40366df-73ef-47e3-8169-d7bf9c208bbb"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <nova:ports>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <nova:port uuid="9d78d6ba-3489-4cfd-ae33-9166be3f940c">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        </nova:port>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </nova:ports>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="serial">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="uuid">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <interface type="ethernet">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <mac address="fa:16:3e:eb:22:67"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <mtu size="1442"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <target dev="tap9d78d6ba-34"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </interface>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log" append="off"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <input type="keyboard" bus="usb"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:19:08 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:19:08 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:19:08 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:19:08 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Preparing to wait for external event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.virt.libvirt.vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='d40366df-73ef-47e3-8169-d7bf9c208bbb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virti
o',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:19:02Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG os_vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.967 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.970 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78d6ba-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.971 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78d6ba-34, col_values=(('external_ids', {'iface-id': '9d78d6ba-3489-4cfd-ae33-9166be3f940c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:22:67', 'vm-uuid': '282b7217-4c1e-4a42-b3da-05616f4e1da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:08 np0005534516 NetworkManager[48915]: <info>  [1764062348.9731] manager: (tap9d78d6ba-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.974 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:08 np0005534516 nova_compute[253538]: 2025-11-25 09:19:08.977 253542 INFO os_vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No VIF found with MAC fa:16:3e:eb:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.038 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Using config drive#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.059 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.074 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.108 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'keypairs' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.424 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating config drive at /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.428 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d4z3lxa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.595 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d4z3lxa" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.621 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.624 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.804 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.806 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting local config drive /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config because it was imported into RBD.#033[00m
Nov 25 04:19:09 np0005534516 kernel: tap9d78d6ba-34: entered promiscuous mode
Nov 25 04:19:09 np0005534516 NetworkManager[48915]: <info>  [1764062349.8638] manager: (tap9d78d6ba-34): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Nov 25 04:19:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:09Z|01643|binding|INFO|Claiming lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c for this chassis.
Nov 25 04:19:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:09Z|01644|binding|INFO|9d78d6ba-3489-4cfd-ae33-9166be3f940c: Claiming fa:16:3e:eb:22:67 10.100.0.14
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 bound to our chassis#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.876 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d70c6d-e66b-4570-a7d7-11486a935ed8#033[00m
Nov 25 04:19:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:09Z|01645|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c ovn-installed in OVS
Nov 25 04:19:09 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:09Z|01646|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c up in Southbound
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:09 np0005534516 nova_compute[253538]: 2025-11-25 09:19:09.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4a6f30-511b-4e84-a7b2-afad8be5ee26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.890 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d70c6d-e1 in ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.893 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d70c6d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.893 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5a68f1-540e-4e13-aaeb-cb1a1a2e71d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37ad09d5-ea13-474b-af62-fd50820e094e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 systemd-udevd[421999]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.909 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[22b6c6d8-8e25-4137-a5eb-c5466f601a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 NetworkManager[48915]: <info>  [1764062349.9129] device (tap9d78d6ba-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:19:09 np0005534516 systemd-machined[215790]: New machine qemu-186-instance-00000099.
Nov 25 04:19:09 np0005534516 NetworkManager[48915]: <info>  [1764062349.9139] device (tap9d78d6ba-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:19:09 np0005534516 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcbfb19-27ea-4105-ad01-d332d76961ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.969 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2724e08b-62bc-42b0-9d61-74ac5977760b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 systemd-udevd[422003]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:19:09 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75acd2d6-ab81-43e7-94dd-c494406b11a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:09 np0005534516 NetworkManager[48915]: <info>  [1764062349.9777] manager: (tap26d70c6d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/676)
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.019 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[84bceb90-6d30-499e-94f7-50e28fd92ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.023 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a90f188-e04c-4d94-9c95-07739df7e3d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 NetworkManager[48915]: <info>  [1764062350.0528] device (tap26d70c6d-e0): carrier: link connected
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.058 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9f06a2b6-3fe6-4d24-9320-bc6381fc4cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.078 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19374641-4be7-4a83-8aab-625f1710ec54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770465, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422032, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.096 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7610ec1-d72f-41b1-8ce3-bed113229400]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9093'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770465, 'tstamp': 770465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422033, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.112 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccf5f83-5d30-468f-a697-806ea81193ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770465, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422034, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25d807-2d31-487b-8782-6d8e8594a628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2226d01a-fd17-4953-a224-6bbdbc50df3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d70c6d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:10 np0005534516 NetworkManager[48915]: <info>  [1764062350.2097] manager: (tap26d70c6d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Nov 25 04:19:10 np0005534516 kernel: tap26d70c6d-e0: entered promiscuous mode
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d70c6d-e0, col_values=(('external_ids', {'iface-id': '49a3f274-19b1-4763-bafc-281fe099299b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:10Z|01647|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.229 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2324c13a-8e36-48a9-a644-dd81c06c27e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.230 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: global
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    log         /dev/log local0 debug
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    log-tag     haproxy-metadata-proxy-26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    user        root
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    group       root
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    maxconn     1024
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    pidfile     /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    daemon
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: defaults
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    log global
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    mode http
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    option httplog
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    option dontlognull
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    option http-server-close
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    option forwardfor
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    retries                 3
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-request    30s
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    timeout connect         30s
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    timeout client          32s
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    timeout server          32s
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    timeout http-keep-alive 30s
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: listen listener
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    bind 169.254.169.254:80
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]:    http-request add-header X-OVN-Network-ID 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:19:10 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.231 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'env', 'PROCESS_TAG=haproxy-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d70c6d-e66b-4570-a7d7-11486a935ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.325 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.3252456, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.326 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Started (Lifecycle Event)
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.346 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.352 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.326644, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.352 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Paused (Lifecycle Event)
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.368 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.372 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.388 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:19:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:19:10 np0005534516 podman[422107]: 2025-11-25 09:19:10.567497623 +0000 UTC m=+0.024585720 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:19:10 np0005534516 podman[422107]: 2025-11-25 09:19:10.942633981 +0000 UTC m=+0.399722068 container create 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.975 253542 DEBUG nova.compute.manager [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.977 253542 DEBUG nova.compute.manager [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Processing event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.977 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.981649, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.983 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Resumed (Lifecycle Event)
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.984 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.987 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance spawned successfully.
Nov 25 04:19:10 np0005534516 nova_compute[253538]: 2025-11-25 09:19:10.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:19:11 np0005534516 nova_compute[253538]: 2025-11-25 09:19:11.000 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:19:11 np0005534516 nova_compute[253538]: 2025-11-25 09:19:11.014 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:19:11 np0005534516 systemd[1]: Started libpod-conmon-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope.
Nov 25 04:19:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1e777106289c23b7a822a87608b70a386d112d661b25f06e481127e86f854e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:11 np0005534516 podman[422107]: 2025-11-25 09:19:11.168014928 +0000 UTC m=+0.625103055 container init 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:19:11 np0005534516 podman[422107]: 2025-11-25 09:19:11.1754921 +0000 UTC m=+0.632580207 container start 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 04:19:11 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : New worker (422128) forked
Nov 25 04:19:11 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : Loading success.
Nov 25 04:19:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 25 04:19:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 25 04:19:12 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 25 04:19:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 112 op/s
Nov 25 04:19:12 np0005534516 nova_compute[253538]: 2025-11-25 09:19:12.806 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:19:12 np0005534516 nova_compute[253538]: 2025-11-25 09:19:12.869 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.040 253542 DEBUG nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.042 253542 WARNING nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state None.
Nov 25 04:19:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:13 np0005534516 nova_compute[253538]: 2025-11-25 09:19:13.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:19:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 04:19:14 np0005534516 nova_compute[253538]: 2025-11-25 09:19:14.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:19:15 np0005534516 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:19:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:19:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248222726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.049 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.126 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:19:16 np0005534516 podman[422166]: 2025-11-25 09:19:16.149608637 +0000 UTC m=+0.054010309 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:19:16 np0005534516 podman[422165]: 2025-11-25 09:19:16.190035116 +0000 UTC m=+0.094789468 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.294 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.295 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3427MB free_disk=59.94276428222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.295 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.296 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.361 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 282b7217-4c1e-4a42-b3da-05616f4e1da3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.362 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.362 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.390 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:19:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 04:19:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:19:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239824916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.867 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.876 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.895 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.961 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:19:16 np0005534516 nova_compute[253538]: 2025-11-25 09:19:16.962 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:19:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 25 04:19:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 25 04:19:18 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 25 04:19:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 04:19:18 np0005534516 nova_compute[253538]: 2025-11-25 09:19:18.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:19 np0005534516 nova_compute[253538]: 2025-11-25 09:19:19.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 127 op/s
Nov 25 04:19:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 106 op/s
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:23 np0005534516 nova_compute[253538]: 2025-11-25 09:19:23.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 04:19:24 np0005534516 nova_compute[253538]: 2025-11-25 09:19:24.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:25 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:25Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:22:67 10.100.0.14
Nov 25 04:19:25 np0005534516 podman[422222]: 2025-11-25 09:19:25.831972823 +0000 UTC m=+0.085421492 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:19:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 15 KiB/s wr, 38 op/s
Nov 25 04:19:27 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:27Z|01648|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 04:19:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 16 KiB/s wr, 50 op/s
Nov 25 04:19:28 np0005534516 nova_compute[253538]: 2025-11-25 09:19:28.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:19:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:19:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:19:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:19:29 np0005534516 nova_compute[253538]: 2025-11-25 09:19:29.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 15 KiB/s wr, 44 op/s
Nov 25 04:19:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 04:19:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:33 np0005534516 nova_compute[253538]: 2025-11-25 09:19:33.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 04:19:34 np0005534516 nova_compute[253538]: 2025-11-25 09:19:34.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 04:19:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 14 KiB/s wr, 13 op/s
Nov 25 04:19:38 np0005534516 nova_compute[253538]: 2025-11-25 09:19:38.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:19:39 np0005534516 nova_compute[253538]: 2025-11-25 09:19:39.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c9e4ba1-c56e-49dd-abfc-b120d358b84e does not exist
Nov 25 04:19:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c4c9d1bc-b424-4082-bbfe-d2598d757cb6 does not exist
Nov 25 04:19:39 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8471738d-8e22-475f-84a4-264c6ed8d87c does not exist
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:19:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:19:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.565735609 +0000 UTC m=+0.052054016 container create 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:19:40 np0005534516 systemd[1]: Started libpod-conmon-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope.
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.539809753 +0000 UTC m=+0.026128150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.694013444 +0000 UTC m=+0.180331861 container init 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.701598041 +0000 UTC m=+0.187916478 container start 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:19:40 np0005534516 competent_ptolemy[422654]: 167 167
Nov 25 04:19:40 np0005534516 systemd[1]: libpod-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope: Deactivated successfully.
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.769821085 +0000 UTC m=+0.256139482 container attach 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.770331039 +0000 UTC m=+0.256649436 container died 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:19:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0058601d3ee8d9d487e3f3d083deba8504aad63d918e9c2616c9d2d4102a4b5c-merged.mount: Deactivated successfully.
Nov 25 04:19:40 np0005534516 podman[422638]: 2025-11-25 09:19:40.853077068 +0000 UTC m=+0.339395465 container remove 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:19:40 np0005534516 systemd[1]: libpod-conmon-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope: Deactivated successfully.
Nov 25 04:19:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:19:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:40 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:19:41 np0005534516 podman[422682]: 2025-11-25 09:19:41.090183763 +0000 UTC m=+0.091942720 container create ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 25 04:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.104 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.105 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:41 np0005534516 podman[422682]: 2025-11-25 09:19:41.023697916 +0000 UTC m=+0.025456893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:41 np0005534516 systemd[1]: Started libpod-conmon-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope.
Nov 25 04:19:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:41 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:41 np0005534516 podman[422682]: 2025-11-25 09:19:41.265510899 +0000 UTC m=+0.267269906 container init ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:19:41 np0005534516 podman[422682]: 2025-11-25 09:19:41.277204078 +0000 UTC m=+0.278963055 container start ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 04:19:41 np0005534516 podman[422682]: 2025-11-25 09:19:41.296948964 +0000 UTC m=+0.298707921 container attach ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 04:19:42 np0005534516 quizzical_tesla[422698]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:19:42 np0005534516 quizzical_tesla[422698]: --> relative data size: 1.0
Nov 25 04:19:42 np0005534516 quizzical_tesla[422698]: --> All data devices are unavailable
Nov 25 04:19:42 np0005534516 systemd[1]: libpod-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Deactivated successfully.
Nov 25 04:19:42 np0005534516 podman[422682]: 2025-11-25 09:19:42.345260808 +0000 UTC m=+1.347019775 container died ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:19:42 np0005534516 systemd[1]: libpod-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Consumed 1.009s CPU time.
Nov 25 04:19:42 np0005534516 systemd[1]: var-lib-containers-storage-overlay-91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6-merged.mount: Deactivated successfully.
Nov 25 04:19:42 np0005534516 podman[422682]: 2025-11-25 09:19:42.407199676 +0000 UTC m=+1.408958633 container remove ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:19:42 np0005534516 systemd[1]: libpod-conmon-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Deactivated successfully.
Nov 25 04:19:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.052459567 +0000 UTC m=+0.086192241 container create b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:42.989182373 +0000 UTC m=+0.022915087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:43 np0005534516 systemd[1]: Started libpod-conmon-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope.
Nov 25 04:19:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.161237873 +0000 UTC m=+0.194970957 container init b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.170749922 +0000 UTC m=+0.204482596 container start b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 04:19:43 np0005534516 nervous_blackwell[422892]: 167 167
Nov 25 04:19:43 np0005534516 systemd[1]: libpod-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope: Deactivated successfully.
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.181652679 +0000 UTC m=+0.215385383 container attach b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.182696828 +0000 UTC m=+0.216429512 container died b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:19:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7afaa41cf58d7b32eb598cb0b9bd30540590ac4bcee3d399b6634895d472e1ff-merged.mount: Deactivated successfully.
Nov 25 04:19:43 np0005534516 podman[422876]: 2025-11-25 09:19:43.397873445 +0000 UTC m=+0.431606129 container remove b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:19:43 np0005534516 systemd[1]: libpod-conmon-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope: Deactivated successfully.
Nov 25 04:19:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:43 np0005534516 podman[422916]: 2025-11-25 09:19:43.565026771 +0000 UTC m=+0.024010815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:43 np0005534516 podman[422916]: 2025-11-25 09:19:43.65999764 +0000 UTC m=+0.118981664 container create e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:19:43 np0005534516 systemd[1]: Started libpod-conmon-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope.
Nov 25 04:19:43 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:43 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:43 np0005534516 podman[422916]: 2025-11-25 09:19:43.840424689 +0000 UTC m=+0.299408743 container init e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 04:19:43 np0005534516 podman[422916]: 2025-11-25 09:19:43.851870781 +0000 UTC m=+0.310854805 container start e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:19:43 np0005534516 podman[422916]: 2025-11-25 09:19:43.874683443 +0000 UTC m=+0.333667487 container attach e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:19:43 np0005534516 nova_compute[253538]: 2025-11-25 09:19:43.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:19:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]: {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    "0": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "devices": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "/dev/loop3"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            ],
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_name": "ceph_lv0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_size": "21470642176",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "name": "ceph_lv0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "tags": {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_name": "ceph",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.crush_device_class": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.encrypted": "0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_id": "0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.vdo": "0"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            },
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "vg_name": "ceph_vg0"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        }
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    ],
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    "1": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "devices": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "/dev/loop4"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            ],
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_name": "ceph_lv1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_size": "21470642176",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "name": "ceph_lv1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "tags": {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_name": "ceph",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.crush_device_class": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.encrypted": "0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_id": "1",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.vdo": "0"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            },
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "vg_name": "ceph_vg1"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        }
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    ],
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    "2": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "devices": [
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "/dev/loop5"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            ],
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_name": "ceph_lv2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_size": "21470642176",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "name": "ceph_lv2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "tags": {
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.cluster_name": "ceph",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.crush_device_class": "",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.encrypted": "0",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osd_id": "2",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:                "ceph.vdo": "0"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            },
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "type": "block",
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:            "vg_name": "ceph_vg2"
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:        }
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]:    ]
Nov 25 04:19:44 np0005534516 unruffled_golick[422933]: }
Nov 25 04:19:44 np0005534516 systemd[1]: libpod-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope: Deactivated successfully.
Nov 25 04:19:44 np0005534516 podman[422916]: 2025-11-25 09:19:44.622408718 +0000 UTC m=+1.081392742 container died e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:19:44 np0005534516 nova_compute[253538]: 2025-11-25 09:19:44.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:19:45 np0005534516 systemd[1]: var-lib-containers-storage-overlay-265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207-merged.mount: Deactivated successfully.
Nov 25 04:19:45 np0005534516 podman[422916]: 2025-11-25 09:19:45.23984416 +0000 UTC m=+1.698828194 container remove e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:19:45 np0005534516 systemd[1]: libpod-conmon-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope: Deactivated successfully.
Nov 25 04:19:45 np0005534516 podman[423092]: 2025-11-25 09:19:45.915449938 +0000 UTC m=+0.036497486 container create 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:19:45 np0005534516 systemd[1]: Started libpod-conmon-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope.
Nov 25 04:19:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:45 np0005534516 podman[423092]: 2025-11-25 09:19:45.898149797 +0000 UTC m=+0.019197335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:46 np0005534516 podman[423092]: 2025-11-25 09:19:46.152860441 +0000 UTC m=+0.273907989 container init 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 04:19:46 np0005534516 podman[423092]: 2025-11-25 09:19:46.159761098 +0000 UTC m=+0.280808616 container start 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:19:46 np0005534516 wizardly_jackson[423108]: 167 167
Nov 25 04:19:46 np0005534516 systemd[1]: libpod-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope: Deactivated successfully.
Nov 25 04:19:46 np0005534516 podman[423092]: 2025-11-25 09:19:46.187102204 +0000 UTC m=+0.308149732 container attach 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:19:46 np0005534516 podman[423092]: 2025-11-25 09:19:46.188979776 +0000 UTC m=+0.310027334 container died 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:19:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-27dacb2e904bd63a77d6b270b9310e457abdf91178b5af1e60c22124a10bdae4-merged.mount: Deactivated successfully.
Nov 25 04:19:46 np0005534516 podman[423092]: 2025-11-25 09:19:46.265540912 +0000 UTC m=+0.386588450 container remove 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:19:46 np0005534516 systemd[1]: libpod-conmon-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope: Deactivated successfully.
Nov 25 04:19:46 np0005534516 podman[423098]: 2025-11-25 09:19:46.305754379 +0000 UTC m=+0.110800642 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 04:19:46 np0005534516 podman[423135]: 2025-11-25 09:19:46.325986801 +0000 UTC m=+0.060822140 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:19:46 np0005534516 podman[423169]: 2025-11-25 09:19:46.459059888 +0000 UTC m=+0.062309179 container create 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:19:46 np0005534516 systemd[1]: Started libpod-conmon-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope.
Nov 25 04:19:46 np0005534516 podman[423169]: 2025-11-25 09:19:46.436103342 +0000 UTC m=+0.039352673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:19:46 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:19:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:46 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:19:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Nov 25 04:19:46 np0005534516 podman[423169]: 2025-11-25 09:19:46.561335307 +0000 UTC m=+0.164584618 container init 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:19:46 np0005534516 podman[423169]: 2025-11-25 09:19:46.569770587 +0000 UTC m=+0.173019878 container start 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 04:19:46 np0005534516 podman[423169]: 2025-11-25 09:19:46.572994125 +0000 UTC m=+0.176243416 container attach 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:19:47 np0005534516 brave_mclean[423186]: {
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_id": 1,
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "type": "bluestore"
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    },
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_id": 2,
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "type": "bluestore"
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    },
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_id": 0,
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:        "type": "bluestore"
Nov 25 04:19:47 np0005534516 brave_mclean[423186]:    }
Nov 25 04:19:47 np0005534516 brave_mclean[423186]: }
Nov 25 04:19:47 np0005534516 systemd[1]: libpod-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope: Deactivated successfully.
Nov 25 04:19:47 np0005534516 podman[423169]: 2025-11-25 09:19:47.554618525 +0000 UTC m=+1.157867806 container died 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:19:47 np0005534516 systemd[1]: var-lib-containers-storage-overlay-33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510-merged.mount: Deactivated successfully.
Nov 25 04:19:47 np0005534516 podman[423169]: 2025-11-25 09:19:47.629345743 +0000 UTC m=+1.232595034 container remove 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:19:47 np0005534516 systemd[1]: libpod-conmon-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope: Deactivated successfully.
Nov 25 04:19:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:19:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:47 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:19:47 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:47 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 26c0a479-fc87-488a-8865-e25675579d21 does not exist
Nov 25 04:19:47 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eb693f94-19b7-430c-9486-b3f11ad15642 does not exist
Nov 25 04:19:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:47.950 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:19:47 np0005534516 nova_compute[253538]: 2025-11-25 09:19:47.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:47 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:47.952 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.183 253542 DEBUG nova.compute.manager [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.183 253542 DEBUG nova.compute.manager [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.184 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.184 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.185 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.230 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.232 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.233 253542 INFO nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Terminating instance#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.234 253542 DEBUG nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:19:48 np0005534516 kernel: tap9d78d6ba-34 (unregistering): left promiscuous mode
Nov 25 04:19:48 np0005534516 NetworkManager[48915]: <info>  [1764062388.2936] device (tap9d78d6ba-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:19:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:48Z|01649|binding|INFO|Releasing lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c from this chassis (sb_readonly=0)
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:48Z|01650|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c down in Southbound
Nov 25 04:19:48 np0005534516 ovn_controller[152859]: 2025-11-25T09:19:48Z|01651|binding|INFO|Removing iface tap9d78d6ba-34 ovn-installed in OVS
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.312 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.313 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 unbound from our chassis#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.315 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d70c6d-e66b-4570-a7d7-11486a935ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96ee1c1e-1891-46ff-853e-7f7e07df1a37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.316 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace which is not needed anymore#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 04:19:48 np0005534516 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 14.566s CPU time.
Nov 25 04:19:48 np0005534516 systemd-machined[215790]: Machine qemu-186-instance-00000099 terminated.
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : haproxy version is 2.8.14-c23fe91
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : path to executable is /usr/sbin/haproxy
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : Exiting Master process...
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : Exiting Master process...
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [ALERT]    (422126) : Current worker (422128) exited with code 143 (Terminated)
Nov 25 04:19:48 np0005534516 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : All workers exited. Exiting... (0)
Nov 25 04:19:48 np0005534516 systemd[1]: libpod-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope: Deactivated successfully.
Nov 25 04:19:48 np0005534516 podman[423307]: 2025-11-25 09:19:48.457544542 +0000 UTC m=+0.049209773 container died 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.473 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.474 253542 DEBUG nova.objects.instance [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'resources' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:19:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f-userdata-shm.mount: Deactivated successfully.
Nov 25 04:19:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3c1e777106289c23b7a822a87608b70a386d112d661b25f06e481127e86f854e-merged.mount: Deactivated successfully.
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.492 253542 DEBUG nova.virt.libvirt.vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:19:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:19:12Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.493 253542 DEBUG nova.network.os_vif_util [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.493 253542 DEBUG nova.network.os_vif_util [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.494 253542 DEBUG os_vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.496 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78d6ba-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.501 253542 INFO os_vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')#033[00m
Nov 25 04:19:48 np0005534516 podman[423307]: 2025-11-25 09:19:48.502829506 +0000 UTC m=+0.094494737 container cleanup 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:19:48 np0005534516 systemd[1]: libpod-conmon-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope: Deactivated successfully.
Nov 25 04:19:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 9.3 KiB/s wr, 5 op/s
Nov 25 04:19:48 np0005534516 podman[423356]: 2025-11-25 09:19:48.56970184 +0000 UTC m=+0.046017347 container remove 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce230150-cd84-4a4a-bd6b-8ee8d99b1635]: (4, ('Tue Nov 25 09:19:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f)\n59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f\nTue Nov 25 09:19:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f)\n59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca1dcf2-17a5-4d54-9e1e-df4c10f0cd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 kernel: tap26d70c6d-e0: left promiscuous mode
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11326ed1-cdc5-40c8-beff-dc2eea33b5dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7fbe0-0dd5-45b4-bf2c-9660be12f57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.618 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b43285b1-e626-44f9-95f7-c435578a6a1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11f87d3c-eb56-4431-831b-3d08667a498e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770455, 'reachable_time': 31865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423378, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 systemd[1]: run-netns-ovnmeta\x2d26d70c6d\x2de66b\x2d4570\x2da7d7\x2d11486a935ed8.mount: Deactivated successfully.
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.640 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:19:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.640 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[810f0c11-fb03-4bbb-99ce-6ba4b3a2b2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.683 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 04:19:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.931 253542 INFO nova.virt.libvirt.driver [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting instance files /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del#033[00m
Nov 25 04:19:48 np0005534516 nova_compute[253538]: 2025-11-25 09:19:48.932 253542 INFO nova.virt.libvirt.driver [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deletion of /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del complete#033[00m
Nov 25 04:19:49 np0005534516 nova_compute[253538]: 2025-11-25 09:19:49.127 253542 INFO nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:19:49 np0005534516 nova_compute[253538]: 2025-11-25 09:19:49.127 253542 DEBUG oslo.service.loopingcall [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:19:49 np0005534516 nova_compute[253538]: 2025-11-25 09:19:49.128 253542 DEBUG nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:19:49 np0005534516 nova_compute[253538]: 2025-11-25 09:19:49.128 253542 DEBUG nova.network.neutron [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:19:49 np0005534516 nova_compute[253538]: 2025-11-25 09:19:49.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 148 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 8.7 KiB/s wr, 27 op/s
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.004 253542 DEBUG nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.006 253542 WARNING nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.205 253542 DEBUG nova.network.neutron [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.310 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.311 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.476 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 2.35 seconds to deallocate network for instance.#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.526 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.632 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.633 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.697 253542 DEBUG oslo_concurrency.processutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:19:51 np0005534516 nova_compute[253538]: 2025-11-25 09:19:51.963 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:19:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730549159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.167 253542 DEBUG oslo_concurrency.processutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.175 253542 DEBUG nova.compute.provider_tree [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.190 253542 DEBUG nova.scheduler.client.report [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.215 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.289 253542 INFO nova.scheduler.client.report [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Deleted allocations for instance 282b7217-4c1e-4a42-b3da-05616f4e1da3#033[00m
Nov 25 04:19:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 122 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 25 04:19:52 np0005534516 nova_compute[253538]: 2025-11-25 09:19:52.722 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:19:53 np0005534516 nova_compute[253538]: 2025-11-25 09:19:53.113 253542 DEBUG nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-deleted-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:19:53 np0005534516 nova_compute[253538]: 2025-11-25 09:19:53.114 253542 INFO nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Neutron deleted interface 9d78d6ba-3489-4cfd-ae33-9166be3f940c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 04:19:53 np0005534516 nova_compute[253538]: 2025-11-25 09:19:53.114 253542 DEBUG nova.network.neutron [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 04:19:53 np0005534516 nova_compute[253538]: 2025-11-25 09:19:53.117 253542 DEBUG nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Detach interface failed, port_id=9d78d6ba-3489-4cfd-ae33-9166be3f940c, reason: Instance 282b7217-4c1e-4a42-b3da-05616f4e1da3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:19:53
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:19:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:19:53 np0005534516 nova_compute[253538]: 2025-11-25 09:19:53.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:19:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 04:19:54 np0005534516 nova_compute[253538]: 2025-11-25 09:19:54.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:55 np0005534516 nova_compute[253538]: 2025-11-25 09:19:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:19:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:19:55.954 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:19:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 04:19:56 np0005534516 podman[423402]: 2025-11-25 09:19:56.854488119 +0000 UTC m=+0.106978187 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 04:19:57 np0005534516 nova_compute[253538]: 2025-11-25 09:19:57.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:57 np0005534516 nova_compute[253538]: 2025-11-25 09:19:57.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:19:58 np0005534516 nova_compute[253538]: 2025-11-25 09:19:58.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:19:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 04:19:59 np0005534516 nova_compute[253538]: 2025-11-25 09:19:59.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:00 np0005534516 nova_compute[253538]: 2025-11-25 09:20:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:00 np0005534516 nova_compute[253538]: 2025-11-25 09:20:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:20:00 np0005534516 nova_compute[253538]: 2025-11-25 09:20:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:20:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 25 04:20:00 np0005534516 nova_compute[253538]: 2025-11-25 09:20:00.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:20:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Nov 25 04:20:03 np0005534516 nova_compute[253538]: 2025-11-25 09:20:03.472 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062388.4714363, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:20:03 np0005534516 nova_compute[253538]: 2025-11-25 09:20:03.472 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:20:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:03 np0005534516 nova_compute[253538]: 2025-11-25 09:20:03.491 253542 DEBUG nova.compute.manager [None req-22a257ec-0687-4db3-8326-093e753ffaf7 - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:20:03 np0005534516 nova_compute[253538]: 2025-11-25 09:20:03.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:03 np0005534516 nova_compute[253538]: 2025-11-25 09:20:03.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:04 np0005534516 nova_compute[253538]: 2025-11-25 09:20:04.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:20:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:20:04 np0005534516 nova_compute[253538]: 2025-11-25 09:20:04.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:20:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.5 total, 600.0 interval
Cumulative writes: 46K writes, 186K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
Cumulative WAL: 46K writes, 16K syncs, 2.84 writes per sync, written: 0.19 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2946 writes, 11K keys, 2946 commit groups, 1.0 writes per commit group, ingest: 12.87 MB, 0.02 MB/s
Interval WAL: 2946 writes, 1143 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:20:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:07 np0005534516 nova_compute[253538]: 2025-11-25 09:20:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:08 np0005534516 nova_compute[253538]: 2025-11-25 09:20:08.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:08 np0005534516 nova_compute[253538]: 2025-11-25 09:20:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:08 np0005534516 nova_compute[253538]: 2025-11-25 09:20:08.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:20:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:09 np0005534516 nova_compute[253538]: 2025-11-25 09:20:09.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:20:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.4 total, 600.0 interval
Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.717 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.777 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.786 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.786 253542 INFO nova.compute.claims [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 04:20:11 np0005534516 nova_compute[253538]: 2025-11-25 09:20:11.875 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:20:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:20:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231694633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.337 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.343 253542 DEBUG nova.compute.provider_tree [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.360 253542 DEBUG nova.scheduler.client.report [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.393 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.394 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.439 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.440 253542 DEBUG nova.network.neutron [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.456 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.475 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.554 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.555 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.555 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating image(s)#033[00m
Nov 25 04:20:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.574 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.594 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.615 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.619 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.703 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.704 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.704 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.724 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:20:12 np0005534516 nova_compute[253538]: 2025-11-25 09:20:12.728 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 73412c84-02b0-4ed4-872c-78d4714956d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.037 253542 DEBUG nova.network.neutron [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.037 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:20:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.496 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 73412c84-02b0-4ed4-872c-78d4714956d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.581 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] resizing rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.688 253542 DEBUG nova.objects.instance [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.769 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.770 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Ensure instance console log exists: /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.770 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.773 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.778 253542 WARNING nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.791 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.792 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.822 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.823 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.824 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.824 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.827 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.827 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 04:20:13 np0005534516 nova_compute[253538]: 2025-11-25 09:20:13.830 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:20:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504769425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.307 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.334 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.338 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 843 KiB/s wr, 3 op/s
Nov 25 04:20:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 04:20:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016389856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.771 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.774 253542 DEBUG nova.objects.instance [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.791 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <uuid>73412c84-02b0-4ed4-872c-78d4714956d9</uuid>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <name>instance-0000009a</name>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <memory>131072</memory>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <vcpu>1</vcpu>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <metadata>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:name>tempest-AggregatesAdminTestJSON-server-149578569</nova:name>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:creationTime>2025-11-25 09:20:13</nova:creationTime>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:flavor name="m1.nano">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:memory>128</nova:memory>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:disk>1</nova:disk>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:swap>0</nova:swap>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </nova:flavor>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:owner>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:user uuid="f152533b0a03477485b6883b0c89d441">tempest-AggregatesAdminTestJSON-1484531335-project-member</nova:user>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <nova:project uuid="195498a87961428fa33efd9eb8f206a9">tempest-AggregatesAdminTestJSON-1484531335</nova:project>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </nova:owner>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <nova:ports/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </nova:instance>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </metadata>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <sysinfo type="smbios">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <system>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="serial">73412c84-02b0-4ed4-872c-78d4714956d9</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="uuid">73412c84-02b0-4ed4-872c-78d4714956d9</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </system>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </sysinfo>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <os>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <boot dev="hd"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <smbios mode="sysinfo"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </os>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <features>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <acpi/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <apic/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <vmcoreinfo/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </features>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <clock offset="utc">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <timer name="hpet" present="no"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </clock>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <cpu mode="host-model" match="exact">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </cpu>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  <devices>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <disk type="network" device="disk">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/73412c84-02b0-4ed4-872c-78d4714956d9_disk">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <target dev="vda" bus="virtio"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <disk type="network" device="cdrom">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <driver type="raw" cache="none"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <source protocol="rbd" name="vms/73412c84-02b0-4ed4-872c-78d4714956d9_disk.config">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </source>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <auth username="openstack">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:        <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      </auth>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <target dev="sda" bus="sata"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </disk>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <serial type="pty">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <log file="/var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/console.log" append="off"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </serial>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <video>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <model type="virtio"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </video>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <input type="tablet" bus="usb"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <rng model="virtio">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </rng>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <controller type="usb" index="0"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    <memballoon model="virtio">
Nov 25 04:20:14 np0005534516 nova_compute[253538]:      <stats period="10"/>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:    </memballoon>
Nov 25 04:20:14 np0005534516 nova_compute[253538]:  </devices>
Nov 25 04:20:14 np0005534516 nova_compute[253538]: </domain>
Nov 25 04:20:14 np0005534516 nova_compute[253538]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.874 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.875 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.875 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Using config drive
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.897 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:20:14 np0005534516 nova_compute[253538]: 2025-11-25 09:20:14.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.066 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating config drive at /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.322 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsk9l96vw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.489 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsk9l96vw" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.519 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.523 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.587 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:20:15 np0005534516 nova_compute[253538]: 2025-11-25 09:20:15.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:20:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644718927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.065 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.076 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.077 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deleting local config drive /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config because it was imported into RBD.
Nov 25 04:20:16 np0005534516 systemd-machined[215790]: New machine qemu-187-instance-0000009a.
Nov 25 04:20:16 np0005534516 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.362 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.364 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.515 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.9786262512207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.516 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.517 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:20:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 73412c84-02b0-4ed4-872c-78d4714956d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:20:16 np0005534516 nova_compute[253538]: 2025-11-25 09:20:16.603 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:20:16 np0005534516 podman[423812]: 2025-11-25 09:20:16.808384621 +0000 UTC m=+0.056341027 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:20:16 np0005534516 podman[423795]: 2025-11-25 09:20:16.813679556 +0000 UTC m=+0.058816725 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:20:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:20:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595739862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.272 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.279 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.293 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.318 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.319 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.539 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062417.5388584, 73412c84-02b0-4ed4-872c-78d4714956d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.540 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Resumed (Lifecycle Event)
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.544 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.544 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.550 253542 INFO nova.virt.libvirt.driver [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance spawned successfully.
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.551 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.562 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.579 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.580 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.581 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.582 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.582 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.583 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.589 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062417.540476, 73412c84-02b0-4ed4-872c-78d4714956d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Started (Lifecycle Event)
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.614 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.618 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.639 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.645 253542 INFO nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 5.09 seconds to spawn the instance on the hypervisor.
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.646 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.731 253542 INFO nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 5.98 seconds to build instance.
Nov 25 04:20:17 np0005534516 nova_compute[253538]: 2025-11-25 09:20:17.753 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:20:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:20:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.695 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.696 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.696 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.697 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.697 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.698 253542 INFO nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Terminating instance
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.698 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.699 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquired lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.699 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 04:20:18 np0005534516 nova_compute[253538]: 2025-11-25 09:20:18.861 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.044 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.060 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Releasing lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.061 253542 DEBUG nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 04:20:19 np0005534516 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Nov 25 04:20:19 np0005534516 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 2.340s CPU time.
Nov 25 04:20:19 np0005534516 systemd-machined[215790]: Machine qemu-187-instance-0000009a terminated.
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.281 253542 INFO nova.virt.libvirt.driver [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance destroyed successfully.#033[00m
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.281 253542 DEBUG nova.objects.instance [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'resources' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.796 253542 INFO nova.virt.libvirt.driver [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deleting instance files /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9_del#033[00m
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.796 253542 INFO nova.virt.libvirt.driver [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deletion of /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9_del complete#033[00m
Nov 25 04:20:19 np0005534516 nova_compute[253538]: 2025-11-25 09:20:19.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.168 253542 INFO nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.168 253542 DEBUG oslo.service.loopingcall [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.169 253542 DEBUG nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.169 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:20:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 116 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 907 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.731 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.744 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.758 253542 INFO nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 0.59 seconds to deallocate network for instance.#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.820 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.821 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:20:20 np0005534516 nova_compute[253538]: 2025-11-25 09:20:20.867 253542 DEBUG oslo_concurrency.processutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:20:21 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:20:21 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128121174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.796 253542 DEBUG oslo_concurrency.processutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.805 253542 DEBUG nova.compute.provider_tree [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.820 253542 DEBUG nova.scheduler.client.report [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.852 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.881 253542 INFO nova.scheduler.client.report [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Deleted allocations for instance 73412c84-02b0-4ed4-872c-78d4714956d9#033[00m
Nov 25 04:20:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:20:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5402.4 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s#012Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:20:21 np0005534516 nova_compute[253538]: 2025-11-25 09:20:21.942 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:20:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:23 np0005534516 nova_compute[253538]: 2025-11-25 09:20:23.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 04:20:24 np0005534516 nova_compute[253538]: 2025-11-25 09:20:24.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 123 op/s
Nov 25 04:20:27 np0005534516 podman[423924]: 2025-11-25 09:20:27.831026172 +0000 UTC m=+0.075996223 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:20:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:28 np0005534516 nova_compute[253538]: 2025-11-25 09:20:28.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 107 op/s
Nov 25 04:20:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:20:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:20:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:20:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:20:29 np0005534516 nova_compute[253538]: 2025-11-25 09:20:29.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 84 op/s
Nov 25 04:20:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 51 op/s
Nov 25 04:20:33 np0005534516 ovn_controller[152859]: 2025-11-25T09:20:33Z|01652|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 04:20:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:33 np0005534516 nova_compute[253538]: 2025-11-25 09:20:33.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:34 np0005534516 nova_compute[253538]: 2025-11-25 09:20:34.280 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062419.2789662, 73412c84-02b0-4ed4-872c-78d4714956d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:20:34 np0005534516 nova_compute[253538]: 2025-11-25 09:20:34.280 253542 INFO nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:20:34 np0005534516 nova_compute[253538]: 2025-11-25 09:20:34.306 253542 DEBUG nova.compute.manager [None req-dabff886-4c65-4c7e-ae5c-30b465c1e561 - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:20:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 04:20:34 np0005534516 nova_compute[253538]: 2025-11-25 09:20:34.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 04:20:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:38 np0005534516 nova_compute[253538]: 2025-11-25 09:20:38.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:39 np0005534516 nova_compute[253538]: 2025-11-25 09:20:39.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.105 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:20:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:20:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 04:20:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:43 np0005534516 nova_compute[253538]: 2025-11-25 09:20:43.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:44 np0005534516 nova_compute[253538]: 2025-11-25 09:20:44.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:47 np0005534516 podman[423955]: 2025-11-25 09:20:47.82900581 +0000 UTC m=+0.063953505 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:20:47 np0005534516 podman[423954]: 2025-11-25 09:20:47.836763862 +0000 UTC m=+0.079805427 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:48 np0005534516 nova_compute[253538]: 2025-11-25 09:20:48.556 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:20:48 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5f3fdbe6-2bfe-45e0-8698-4e43b79b1daf does not exist
Nov 25 04:20:48 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4489c1f5-5e7d-4966-99fc-b9b72a8a6a8f does not exist
Nov 25 04:20:48 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 225f3f89-0a2d-4a0c-babc-833d96c521e5 does not exist
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:20:48 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:20:49 np0005534516 podman[424260]: 2025-11-25 09:20:49.363457943 +0000 UTC m=+0.029007472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:49 np0005534516 podman[424260]: 2025-11-25 09:20:49.614908127 +0000 UTC m=+0.280457566 container create eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:20:49 np0005534516 systemd[1]: Started libpod-conmon-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope.
Nov 25 04:20:49 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:49 np0005534516 nova_compute[253538]: 2025-11-25 09:20:49.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:49 np0005534516 podman[424260]: 2025-11-25 09:20:49.962827062 +0000 UTC m=+0.628376521 container init eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:49 np0005534516 podman[424260]: 2025-11-25 09:20:49.971888149 +0000 UTC m=+0.637437578 container start eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:20:49 np0005534516 vibrant_saha[424276]: 167 167
Nov 25 04:20:49 np0005534516 systemd[1]: libpod-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope: Deactivated successfully.
Nov 25 04:20:50 np0005534516 podman[424260]: 2025-11-25 09:20:50.076582214 +0000 UTC m=+0.742131663 container attach eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:20:50 np0005534516 podman[424260]: 2025-11-25 09:20:50.077653042 +0000 UTC m=+0.743202481 container died eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:20:50 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bc987e33be2951abf964d7d211ef1a4ded0d54379645fe1e829b46ae7ecce0ec-merged.mount: Deactivated successfully.
Nov 25 04:20:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:50.475 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:20:50 np0005534516 nova_compute[253538]: 2025-11-25 09:20:50.475 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:50 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:50.476 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:20:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:50 np0005534516 podman[424260]: 2025-11-25 09:20:50.647769305 +0000 UTC m=+1.313318734 container remove eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:20:50 np0005534516 systemd[1]: libpod-conmon-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope: Deactivated successfully.
Nov 25 04:20:50 np0005534516 podman[424301]: 2025-11-25 09:20:50.877174018 +0000 UTC m=+0.089124100 container create 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:50 np0005534516 podman[424301]: 2025-11-25 09:20:50.810258025 +0000 UTC m=+0.022208087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:51 np0005534516 systemd[1]: Started libpod-conmon-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope.
Nov 25 04:20:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:51 np0005534516 podman[424301]: 2025-11-25 09:20:51.284580316 +0000 UTC m=+0.496530428 container init 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 04:20:51 np0005534516 podman[424301]: 2025-11-25 09:20:51.291863784 +0000 UTC m=+0.503813876 container start 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:51 np0005534516 podman[424301]: 2025-11-25 09:20:51.55987055 +0000 UTC m=+0.771820632 container attach 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:20:52 np0005534516 nova_compute[253538]: 2025-11-25 09:20:52.312 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:52 np0005534516 gallant_montalcini[424318]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:20:52 np0005534516 gallant_montalcini[424318]: --> relative data size: 1.0
Nov 25 04:20:52 np0005534516 gallant_montalcini[424318]: --> All data devices are unavailable
Nov 25 04:20:52 np0005534516 systemd[1]: libpod-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope: Deactivated successfully.
Nov 25 04:20:52 np0005534516 podman[424347]: 2025-11-25 09:20:52.395251724 +0000 UTC m=+0.032611709 container died 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:20:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513-merged.mount: Deactivated successfully.
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:20:53
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups']
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:20:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:20:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:53 np0005534516 podman[424347]: 2025-11-25 09:20:53.542706286 +0000 UTC m=+1.180066251 container remove 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:20:53 np0005534516 systemd[1]: libpod-conmon-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope: Deactivated successfully.
Nov 25 04:20:53 np0005534516 nova_compute[253538]: 2025-11-25 09:20:53.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.170798259 +0000 UTC m=+0.021777555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.280399487 +0000 UTC m=+0.131378753 container create 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 04:20:54 np0005534516 systemd[1]: Started libpod-conmon-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope.
Nov 25 04:20:54 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.489837357 +0000 UTC m=+0.340816643 container init 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.497678061 +0000 UTC m=+0.348657327 container start 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:20:54 np0005534516 romantic_leavitt[424518]: 167 167
Nov 25 04:20:54 np0005534516 systemd[1]: libpod-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope: Deactivated successfully.
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.553665547 +0000 UTC m=+0.404644833 container attach 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 04:20:54 np0005534516 podman[424502]: 2025-11-25 09:20:54.555281411 +0000 UTC m=+0.406260707 container died 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:20:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-52c4e31615e2cfcdb31699b95bdbc2310afde06f442bbc5771e3ea4fc14e56f4-merged.mount: Deactivated successfully.
Nov 25 04:20:54 np0005534516 nova_compute[253538]: 2025-11-25 09:20:54.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:55 np0005534516 podman[424502]: 2025-11-25 09:20:55.355419945 +0000 UTC m=+1.206399211 container remove 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:20:55 np0005534516 systemd[1]: libpod-conmon-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope: Deactivated successfully.
Nov 25 04:20:55 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:20:55.478 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:20:55 np0005534516 podman[424541]: 2025-11-25 09:20:55.581060436 +0000 UTC m=+0.092313217 container create 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:20:55 np0005534516 podman[424541]: 2025-11-25 09:20:55.513837014 +0000 UTC m=+0.025089825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:55 np0005534516 systemd[1]: Started libpod-conmon-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope.
Nov 25 04:20:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:55 np0005534516 podman[424541]: 2025-11-25 09:20:55.756077377 +0000 UTC m=+0.267330178 container init 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:55 np0005534516 podman[424541]: 2025-11-25 09:20:55.762212034 +0000 UTC m=+0.273464815 container start 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:55 np0005534516 podman[424541]: 2025-11-25 09:20:55.822462777 +0000 UTC m=+0.333715578 container attach 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]: {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    "0": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "devices": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "/dev/loop3"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            ],
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_name": "ceph_lv0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_size": "21470642176",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "name": "ceph_lv0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "tags": {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_name": "ceph",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.crush_device_class": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.encrypted": "0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_id": "0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.vdo": "0"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            },
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "vg_name": "ceph_vg0"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        }
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    ],
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    "1": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "devices": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "/dev/loop4"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            ],
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_name": "ceph_lv1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_size": "21470642176",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "name": "ceph_lv1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "tags": {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_name": "ceph",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.crush_device_class": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.encrypted": "0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_id": "1",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.vdo": "0"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            },
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "vg_name": "ceph_vg1"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        }
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    ],
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    "2": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "devices": [
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "/dev/loop5"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            ],
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_name": "ceph_lv2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_size": "21470642176",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "name": "ceph_lv2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "tags": {
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.cluster_name": "ceph",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.crush_device_class": "",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.encrypted": "0",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osd_id": "2",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:                "ceph.vdo": "0"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            },
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "type": "block",
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:            "vg_name": "ceph_vg2"
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:        }
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]:    ]
Nov 25 04:20:56 np0005534516 vigilant_wing[424558]: }
Nov 25 04:20:56 np0005534516 systemd[1]: libpod-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope: Deactivated successfully.
Nov 25 04:20:56 np0005534516 podman[424541]: 2025-11-25 09:20:56.541961433 +0000 UTC m=+1.053214224 container died 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:20:56 np0005534516 nova_compute[253538]: 2025-11-25 09:20:56.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:20:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45-merged.mount: Deactivated successfully.
Nov 25 04:20:57 np0005534516 podman[424541]: 2025-11-25 09:20:57.052907711 +0000 UTC m=+1.564160532 container remove 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:20:57 np0005534516 systemd[1]: libpod-conmon-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope: Deactivated successfully.
Nov 25 04:20:57 np0005534516 podman[424719]: 2025-11-25 09:20:57.710153089 +0000 UTC m=+0.102539156 container create 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:20:57 np0005534516 podman[424719]: 2025-11-25 09:20:57.627330751 +0000 UTC m=+0.019716848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:57 np0005534516 systemd[1]: Started libpod-conmon-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope.
Nov 25 04:20:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:57 np0005534516 podman[424719]: 2025-11-25 09:20:57.952906577 +0000 UTC m=+0.345292664 container init 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:20:57 np0005534516 podman[424719]: 2025-11-25 09:20:57.960086993 +0000 UTC m=+0.352473060 container start 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:20:57 np0005534516 hardcore_poitras[424735]: 167 167
Nov 25 04:20:57 np0005534516 systemd[1]: libpod-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope: Deactivated successfully.
Nov 25 04:20:58 np0005534516 podman[424719]: 2025-11-25 09:20:58.188832049 +0000 UTC m=+0.581218156 container attach 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:20:58 np0005534516 podman[424719]: 2025-11-25 09:20:58.189760775 +0000 UTC m=+0.582146912 container died 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:20:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:20:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9d348ea0d038407c834841e37c586e229548ee2fdbd150554aaed7c7961a8c40-merged.mount: Deactivated successfully.
Nov 25 04:20:58 np0005534516 nova_compute[253538]: 2025-11-25 09:20:58.562 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:20:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:20:58 np0005534516 podman[424719]: 2025-11-25 09:20:58.779637685 +0000 UTC m=+1.172023752 container remove 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 04:20:58 np0005534516 systemd[1]: libpod-conmon-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope: Deactivated successfully.
Nov 25 04:20:58 np0005534516 podman[424741]: 2025-11-25 09:20:58.910642767 +0000 UTC m=+0.911587312 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:20:58 np0005534516 podman[424785]: 2025-11-25 09:20:58.980519042 +0000 UTC m=+0.077729720 container create 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:20:59 np0005534516 podman[424785]: 2025-11-25 09:20:58.923302992 +0000 UTC m=+0.020513690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:20:59 np0005534516 systemd[1]: Started libpod-conmon-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope.
Nov 25 04:20:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:20:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:59 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:20:59 np0005534516 podman[424785]: 2025-11-25 09:20:59.172552248 +0000 UTC m=+0.269762956 container init 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:20:59 np0005534516 podman[424785]: 2025-11-25 09:20:59.180656698 +0000 UTC m=+0.277867376 container start 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:20:59 np0005534516 podman[424785]: 2025-11-25 09:20:59.205121545 +0000 UTC m=+0.302332223 container attach 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:20:59 np0005534516 nova_compute[253538]: 2025-11-25 09:20:59.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]: {
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_id": 1,
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "type": "bluestore"
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    },
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_id": 2,
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "type": "bluestore"
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    },
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_id": 0,
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:        "type": "bluestore"
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]:    }
Nov 25 04:21:00 np0005534516 naughty_tharp[424807]: }
Nov 25 04:21:00 np0005534516 systemd[1]: libpod-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Deactivated successfully.
Nov 25 04:21:00 np0005534516 systemd[1]: libpod-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Consumed 1.027s CPU time.
Nov 25 04:21:00 np0005534516 podman[424785]: 2025-11-25 09:21:00.206148556 +0000 UTC m=+1.303359264 container died 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:21:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e-merged.mount: Deactivated successfully.
Nov 25 04:21:00 np0005534516 podman[424785]: 2025-11-25 09:21:00.281973473 +0000 UTC m=+1.379184151 container remove 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:21:00 np0005534516 systemd[1]: libpod-conmon-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Deactivated successfully.
Nov 25 04:21:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:21:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:21:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:21:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:21:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c2151727-2d2f-4bca-a401-ca77242fc724 does not exist
Nov 25 04:21:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 091011c9-35e9-4ac6-8d1a-ccf45d3403f2 does not exist
Nov 25 04:21:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:21:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:21:02 np0005534516 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:21:02 np0005534516 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:21:02 np0005534516 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:21:02 np0005534516 nova_compute[253538]: 2025-11-25 09:21:02.588 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:21:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:03 np0005534516 nova_compute[253538]: 2025-11-25 09:21:03.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:21:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:04 np0005534516 nova_compute[253538]: 2025-11-25 09:21:04.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:05 np0005534516 nova_compute[253538]: 2025-11-25 09:21:05.579 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:06 np0005534516 nova_compute[253538]: 2025-11-25 09:21:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:08 np0005534516 nova_compute[253538]: 2025-11-25 09:21:08.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:09 np0005534516 nova_compute[253538]: 2025-11-25 09:21:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:09 np0005534516 nova_compute[253538]: 2025-11-25 09:21:09.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:09 np0005534516 nova_compute[253538]: 2025-11-25 09:21:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 04:21:09 np0005534516 nova_compute[253538]: 2025-11-25 09:21:09.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:10 np0005534516 ovn_controller[152859]: 2025-11-25T09:21:10Z|01653|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 04:21:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:13 np0005534516 nova_compute[253538]: 2025-11-25 09:21:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:13 np0005534516 nova_compute[253538]: 2025-11-25 09:21:13.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:14 np0005534516 nova_compute[253538]: 2025-11-25 09:21:14.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:14 np0005534516 nova_compute[253538]: 2025-11-25 09:21:14.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.707 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:21:17 np0005534516 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:21:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:21:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600632121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.190 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:21:18 np0005534516 podman[424926]: 2025-11-25 09:21:18.308483067 +0000 UTC m=+0.065300171 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 04:21:18 np0005534516 podman[424925]: 2025-11-25 09:21:18.313634797 +0000 UTC m=+0.070558494 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.385 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.441 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.442 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.463 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:21:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:21:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662410292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.968 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.974 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:21:18 np0005534516 nova_compute[253538]: 2025-11-25 09:21:18.997 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:21:19 np0005534516 nova_compute[253538]: 2025-11-25 09:21:19.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:21:19 np0005534516 nova_compute[253538]: 2025-11-25 09:21:19.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:21:19 np0005534516 nova_compute[253538]: 2025-11-25 09:21:19.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:23 np0005534516 nova_compute[253538]: 2025-11-25 09:21:23.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:23 np0005534516 nova_compute[253538]: 2025-11-25 09:21:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 04:21:23 np0005534516 nova_compute[253538]: 2025-11-25 09:21:23.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:25 np0005534516 nova_compute[253538]: 2025-11-25 09:21:25.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:28 np0005534516 nova_compute[253538]: 2025-11-25 09:21:28.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:21:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:21:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:21:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:21:29 np0005534516 podman[424988]: 2025-11-25 09:21:29.861673501 +0000 UTC m=+0.107522583 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 04:21:30 np0005534516 nova_compute[253538]: 2025-11-25 09:21:30.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 25 04:21:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 25 04:21:30 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 25 04:21:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 04:21:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 25 04:21:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 25 04:21:32 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 25 04:21:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:33 np0005534516 nova_compute[253538]: 2025-11-25 09:21:33.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 52 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 04:21:35 np0005534516 nova_compute[253538]: 2025-11-25 09:21:35.042 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.520642) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498520680, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1837, "num_deletes": 256, "total_data_size": 2948287, "memory_usage": 2997320, "flush_reason": "Manual Compaction"}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498530933, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1754942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61647, "largest_seqno": 63483, "table_properties": {"data_size": 1748606, "index_size": 3282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16338, "raw_average_key_size": 21, "raw_value_size": 1734710, "raw_average_value_size": 2244, "num_data_blocks": 149, "num_entries": 773, "num_filter_entries": 773, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062314, "oldest_key_time": 1764062314, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 10448 microseconds, and 4799 cpu microseconds.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.531083) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1754942 bytes OK
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.531108) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537656) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537701) EVENT_LOG_v1 {"time_micros": 1764062498537692, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537723) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2940416, prev total WAL file size 2951727, number of live WAL files 2.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.538982) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353035' seq:72057594037927935, type:22 .. '6D6772737461740032373536' seq:0, type:0; will stop at (end)
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1713KB)], [146(10101KB)]
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498539053, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12098437, "oldest_snapshot_seqno": -1}
Nov 25 04:21:38 np0005534516 nova_compute[253538]: 2025-11-25 09:21:38.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8291 keys, 9927801 bytes, temperature: kUnknown
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498600902, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 9927801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9876035, "index_size": 29957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 216589, "raw_average_key_size": 26, "raw_value_size": 9731745, "raw_average_value_size": 1173, "num_data_blocks": 1169, "num_entries": 8291, "num_filter_entries": 8291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.601176) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 9927801 bytes
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.602340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 160.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(12.6) write-amplify(5.7) OK, records in: 8731, records dropped: 440 output_compression: NoCompression
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.602360) EVENT_LOG_v1 {"time_micros": 1764062498602350, "job": 90, "event": "compaction_finished", "compaction_time_micros": 61937, "compaction_time_cpu_micros": 23227, "output_level": 6, "num_output_files": 1, "total_output_size": 9927801, "num_input_records": 8731, "num_output_records": 8291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498602837, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498604978, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.538886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498605808, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19384, "flush_reason": "Manual Compaction"}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 25 04:21:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498609760, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 13305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63484, "largest_seqno": 63739, "table_properties": {"data_size": 11551, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062498, "oldest_key_time": 1764062498, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 4117 microseconds, and 1484 cpu microseconds.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.609811) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 13305 bytes OK
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.609836) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611359) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611378) EVENT_LOG_v1 {"time_micros": 1764062498611370, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(12KB)], [149(9695KB)]
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498612062, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 9941106, "oldest_snapshot_seqno": -1}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8041 keys, 8216905 bytes, temperature: kUnknown
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498659480, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8216905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8168371, "index_size": 27301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 212089, "raw_average_key_size": 26, "raw_value_size": 8030013, "raw_average_value_size": 998, "num_data_blocks": 1048, "num_entries": 8041, "num_filter_entries": 8041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.659841) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8216905 bytes
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.661115) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.1 rd, 172.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.5 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(1364.8) write-amplify(617.6) OK, records in: 8547, records dropped: 506 output_compression: NoCompression
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.661146) EVENT_LOG_v1 {"time_micros": 1764062498661133, "job": 92, "event": "compaction_finished", "compaction_time_micros": 47541, "compaction_time_cpu_micros": 25666, "output_level": 6, "num_output_files": 1, "total_output_size": 8216905, "num_input_records": 8547, "num_output_records": 8041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498661349, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498664578, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:38 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:21:39 np0005534516 nova_compute[253538]: 2025-11-25 09:21:39.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:21:39 np0005534516 nova_compute[253538]: 2025-11-25 09:21:39.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:21:39 np0005534516 nova_compute[253538]: 2025-11-25 09:21:39.607 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:21:40 np0005534516 nova_compute[253538]: 2025-11-25 09:21:40.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Nov 25 04:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:21:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.107 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:21:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 04:21:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:43 np0005534516 nova_compute[253538]: 2025-11-25 09:21:43.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 25 04:21:45 np0005534516 nova_compute[253538]: 2025-11-25 09:21:45.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:45 np0005534516 nova_compute[253538]: 2025-11-25 09:21:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:21:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:48 np0005534516 nova_compute[253538]: 2025-11-25 09:21:48.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:48 np0005534516 podman[425018]: 2025-11-25 09:21:48.810178116 +0000 UTC m=+0.054108906 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:21:48 np0005534516 podman[425017]: 2025-11-25 09:21:48.82609918 +0000 UTC m=+0.069825215 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 04:21:50 np0005534516 nova_compute[253538]: 2025-11-25 09:21:50.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:52 np0005534516 nova_compute[253538]: 2025-11-25 09:21:52.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:21:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:21:53
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'backups']
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:21:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:21:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:53 np0005534516 nova_compute[253538]: 2025-11-25 09:21:53.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:21:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:55 np0005534516 nova_compute[253538]: 2025-11-25 09:21:55.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:21:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:21:58 np0005534516 nova_compute[253538]: 2025-11-25 09:21:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:21:58 np0005534516 nova_compute[253538]: 2025-11-25 09:21:58.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:21:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:00 np0005534516 nova_compute[253538]: 2025-11-25 09:22:00.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:22:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:00 np0005534516 podman[425080]: 2025-11-25 09:22:00.704396827 +0000 UTC m=+0.096789529 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eae0f1db-3b04-48ea-973e-51446c668f5b does not exist
Nov 25 04:22:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 31a77a54-cfad-4656-9879-07850ecf10e7 does not exist
Nov 25 04:22:01 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev eabdb512-9bff-4e26-9a2e-11dde76d1767 does not exist
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:01 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:22:01 np0005534516 podman[425351]: 2025-11-25 09:22:01.940873026 +0000 UTC m=+0.041673007 container create bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:22:01 np0005534516 systemd[1]: Started libpod-conmon-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope.
Nov 25 04:22:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:01.922709971 +0000 UTC m=+0.023509982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:02.028494375 +0000 UTC m=+0.129294386 container init bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:02.037814958 +0000 UTC m=+0.138614939 container start bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:02.043767191 +0000 UTC m=+0.144567202 container attach bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:22:02 np0005534516 hungry_kirch[425367]: 167 167
Nov 25 04:22:02 np0005534516 systemd[1]: libpod-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope: Deactivated successfully.
Nov 25 04:22:02 np0005534516 conmon[425367]: conmon bbd1b7f1234447f02bde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope/container/memory.events
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:02.051749218 +0000 UTC m=+0.152549219 container died bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:22:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4457175f1487932a659d67a08fb27dda186a440b731efd24d810d29fe1555d78-merged.mount: Deactivated successfully.
Nov 25 04:22:02 np0005534516 podman[425351]: 2025-11-25 09:22:02.096266572 +0000 UTC m=+0.197066553 container remove bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:22:02 np0005534516 systemd[1]: libpod-conmon-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope: Deactivated successfully.
Nov 25 04:22:02 np0005534516 podman[425390]: 2025-11-25 09:22:02.269876825 +0000 UTC m=+0.044097473 container create 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:22:02 np0005534516 systemd[1]: Started libpod-conmon-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope.
Nov 25 04:22:02 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:02 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:02 np0005534516 podman[425390]: 2025-11-25 09:22:02.250374393 +0000 UTC m=+0.024595071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:02 np0005534516 podman[425390]: 2025-11-25 09:22:02.350839042 +0000 UTC m=+0.125059710 container init 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:22:02 np0005534516 podman[425390]: 2025-11-25 09:22:02.361398561 +0000 UTC m=+0.135619209 container start 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:22:02 np0005534516 podman[425390]: 2025-11-25 09:22:02.364548416 +0000 UTC m=+0.138769084 container attach 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:22:02 np0005534516 nova_compute[253538]: 2025-11-25 09:22:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:22:02 np0005534516 nova_compute[253538]: 2025-11-25 09:22:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 04:22:02 np0005534516 nova_compute[253538]: 2025-11-25 09:22:02.558 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 04:22:02 np0005534516 nova_compute[253538]: 2025-11-25 09:22:02.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 04:22:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:03 np0005534516 gifted_swirles[425406]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:22:03 np0005534516 gifted_swirles[425406]: --> relative data size: 1.0
Nov 25 04:22:03 np0005534516 gifted_swirles[425406]: --> All data devices are unavailable
Nov 25 04:22:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:03 np0005534516 systemd[1]: libpod-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Deactivated successfully.
Nov 25 04:22:03 np0005534516 systemd[1]: libpod-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Consumed 1.131s CPU time.
Nov 25 04:22:03 np0005534516 podman[425390]: 2025-11-25 09:22:03.542507999 +0000 UTC m=+1.316728667 container died 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:22:03 np0005534516 nova_compute[253538]: 2025-11-25 09:22:03.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:22:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2-merged.mount: Deactivated successfully.
Nov 25 04:22:03 np0005534516 podman[425390]: 2025-11-25 09:22:03.705271378 +0000 UTC m=+1.479492026 container remove 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 04:22:03 np0005534516 systemd[1]: libpod-conmon-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Deactivated successfully.
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.392905093 +0000 UTC m=+0.043029613 container create 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:22:04 np0005534516 systemd[1]: Started libpod-conmon-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope.
Nov 25 04:22:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.374433059 +0000 UTC m=+0.024557609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.492117048 +0000 UTC m=+0.142241588 container init 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.500816136 +0000 UTC m=+0.150940666 container start 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.503723455 +0000 UTC m=+0.153847985 container attach 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 04:22:04 np0005534516 dreamy_varahamihira[425604]: 167 167
Nov 25 04:22:04 np0005534516 systemd[1]: libpod-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope: Deactivated successfully.
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.510321225 +0000 UTC m=+0.160445755 container died 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:22:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-517aff901b9ba4530065e9feac4ed62e5a6a8e02e669c770fa3000c44cf036c3-merged.mount: Deactivated successfully.
Nov 25 04:22:04 np0005534516 podman[425588]: 2025-11-25 09:22:04.575868551 +0000 UTC m=+0.225993081 container remove 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:22:04 np0005534516 systemd[1]: libpod-conmon-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope: Deactivated successfully.
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:22:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:04 np0005534516 podman[425628]: 2025-11-25 09:22:04.785589619 +0000 UTC m=+0.071777698 container create 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 04:22:04 np0005534516 podman[425628]: 2025-11-25 09:22:04.742899605 +0000 UTC m=+0.029087714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:04 np0005534516 systemd[1]: Started libpod-conmon-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope.
Nov 25 04:22:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:04 np0005534516 podman[425628]: 2025-11-25 09:22:04.897935701 +0000 UTC m=+0.184123820 container init 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:22:04 np0005534516 podman[425628]: 2025-11-25 09:22:04.905451937 +0000 UTC m=+0.191640026 container start 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:22:04 np0005534516 podman[425628]: 2025-11-25 09:22:04.918526113 +0000 UTC m=+0.204714232 container attach 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:22:05 np0005534516 nova_compute[253538]: 2025-11-25 09:22:05.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:05 np0005534516 nova_compute[253538]: 2025-11-25 09:22:05.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:05 np0005534516 trusting_easley[425645]: {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    "0": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "devices": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "/dev/loop3"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            ],
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_name": "ceph_lv0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_size": "21470642176",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "name": "ceph_lv0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "tags": {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_name": "ceph",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.crush_device_class": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.encrypted": "0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_id": "0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.vdo": "0"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            },
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "vg_name": "ceph_vg0"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        }
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    ],
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    "1": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "devices": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "/dev/loop4"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            ],
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_name": "ceph_lv1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_size": "21470642176",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "name": "ceph_lv1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "tags": {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_name": "ceph",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.crush_device_class": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.encrypted": "0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_id": "1",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.vdo": "0"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            },
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "vg_name": "ceph_vg1"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        }
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    ],
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    "2": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "devices": [
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "/dev/loop5"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            ],
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_name": "ceph_lv2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_size": "21470642176",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "name": "ceph_lv2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "tags": {
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.cluster_name": "ceph",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.crush_device_class": "",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.encrypted": "0",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osd_id": "2",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:                "ceph.vdo": "0"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            },
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "type": "block",
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:            "vg_name": "ceph_vg2"
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:        }
Nov 25 04:22:05 np0005534516 trusting_easley[425645]:    ]
Nov 25 04:22:05 np0005534516 trusting_easley[425645]: }
Nov 25 04:22:05 np0005534516 systemd[1]: libpod-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope: Deactivated successfully.
Nov 25 04:22:05 np0005534516 podman[425654]: 2025-11-25 09:22:05.851426096 +0000 UTC m=+0.031227432 container died 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:22:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac-merged.mount: Deactivated successfully.
Nov 25 04:22:06 np0005534516 podman[425654]: 2025-11-25 09:22:06.02619185 +0000 UTC m=+0.205993166 container remove 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 04:22:06 np0005534516 systemd[1]: libpod-conmon-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope: Deactivated successfully.
Nov 25 04:22:06 np0005534516 nova_compute[253538]: 2025-11-25 09:22:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.725042543 +0000 UTC m=+0.050484588 container create d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:22:06 np0005534516 systemd[1]: Started libpod-conmon-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope.
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.703743702 +0000 UTC m=+0.029185777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.824569565 +0000 UTC m=+0.150011640 container init d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.833389796 +0000 UTC m=+0.158831841 container start d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.839179794 +0000 UTC m=+0.164621869 container attach d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 04:22:06 np0005534516 focused_davinci[425827]: 167 167
Nov 25 04:22:06 np0005534516 systemd[1]: libpod-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope: Deactivated successfully.
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.840801638 +0000 UTC m=+0.166243683 container died d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:22:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2ff1b9ce1bbe2965ab562cbb30c77ccf7cc528e1c88957481f238d83fe257604-merged.mount: Deactivated successfully.
Nov 25 04:22:06 np0005534516 podman[425810]: 2025-11-25 09:22:06.98062719 +0000 UTC m=+0.306069235 container remove d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:22:07 np0005534516 systemd[1]: libpod-conmon-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope: Deactivated successfully.
Nov 25 04:22:07 np0005534516 podman[425851]: 2025-11-25 09:22:07.163073724 +0000 UTC m=+0.053660704 container create a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:22:07 np0005534516 systemd[1]: Started libpod-conmon-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope.
Nov 25 04:22:07 np0005534516 podman[425851]: 2025-11-25 09:22:07.137946759 +0000 UTC m=+0.028533759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:22:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:22:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:07 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:22:07 np0005534516 podman[425851]: 2025-11-25 09:22:07.269760182 +0000 UTC m=+0.160347182 container init a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:22:07 np0005534516 podman[425851]: 2025-11-25 09:22:07.280513626 +0000 UTC m=+0.171100626 container start a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:22:07 np0005534516 podman[425851]: 2025-11-25 09:22:07.283410055 +0000 UTC m=+0.173997045 container attach a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]: {
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_id": 1,
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "type": "bluestore"
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    },
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_id": 2,
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "type": "bluestore"
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    },
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_id": 0,
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:        "type": "bluestore"
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]:    }
Nov 25 04:22:08 np0005534516 naughty_kilby[425868]: }
Nov 25 04:22:08 np0005534516 systemd[1]: libpod-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Deactivated successfully.
Nov 25 04:22:08 np0005534516 podman[425851]: 2025-11-25 09:22:08.421375938 +0000 UTC m=+1.311962928 container died a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:22:08 np0005534516 systemd[1]: libpod-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Consumed 1.113s CPU time.
Nov 25 04:22:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e-merged.mount: Deactivated successfully.
Nov 25 04:22:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:08 np0005534516 podman[425851]: 2025-11-25 09:22:08.554983631 +0000 UTC m=+1.445570611 container remove a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:22:08 np0005534516 systemd[1]: libpod-conmon-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Deactivated successfully.
Nov 25 04:22:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:22:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:22:08 np0005534516 nova_compute[253538]: 2025-11-25 09:22:08.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:08 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3afc6391-063d-4346-9f9a-bcc214d6b308 does not exist
Nov 25 04:22:08 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev ee97b1b5-8776-4820-8106-a9a09f19c525 does not exist
Nov 25 04:22:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 04:22:09 np0005534516 nova_compute[253538]: 2025-11-25 09:22:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:09 np0005534516 nova_compute[253538]: 2025-11-25 09:22:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:22:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:22:10 np0005534516 nova_compute[253538]: 2025-11-25 09:22:10.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 04:22:11 np0005534516 nova_compute[253538]: 2025-11-25 09:22:11.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:22:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:13 np0005534516 nova_compute[253538]: 2025-11-25 09:22:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:13 np0005534516 nova_compute[253538]: 2025-11-25 09:22:13.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:22:15 np0005534516 nova_compute[253538]: 2025-11-25 09:22:15.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:16 np0005534516 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 04:22:16 np0005534516 systemd[1]: virtsecretd.service: Consumed 1.190s CPU time.
Nov 25 04:22:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:22:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:18 np0005534516 nova_compute[253538]: 2025-11-25 09:22:18.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:22:18 np0005534516 nova_compute[253538]: 2025-11-25 09:22:18.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:22:19 np0005534516 nova_compute[253538]: 2025-11-25 09:22:19.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:22:19 np0005534516 podman[425985]: 2025-11-25 09:22:19.821415124 +0000 UTC m=+0.060667504 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 04:22:19 np0005534516 podman[425968]: 2025-11-25 09:22:19.834784619 +0000 UTC m=+0.075052337 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 04:22:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:22:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.060 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.290 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.292 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.293 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.293 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.376 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.377 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.401 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:22:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 04:22:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:22:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75560992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.873 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.880 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.897 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.899 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:22:20 np0005534516 nova_compute[253538]: 2025-11-25 09:22:20.900 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:22:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:23 np0005534516 nova_compute[253538]: 2025-11-25 09:22:23.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:25 np0005534516 nova_compute[253538]: 2025-11-25 09:22:25.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:28 np0005534516 nova_compute[253538]: 2025-11-25 09:22:28.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:22:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:22:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:22:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:22:30 np0005534516 nova_compute[253538]: 2025-11-25 09:22:30.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:30 np0005534516 podman[426042]: 2025-11-25 09:22:30.839344958 +0000 UTC m=+0.093310385 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 04:22:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:33 np0005534516 nova_compute[253538]: 2025-11-25 09:22:33.635 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:35 np0005534516 nova_compute[253538]: 2025-11-25 09:22:35.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:38 np0005534516 nova_compute[253538]: 2025-11-25 09:22:38.664 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:40 np0005534516 nova_compute[253538]: 2025-11-25 09:22:40.127 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.108 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.108 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:22:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.109 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:22:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:43 np0005534516 nova_compute[253538]: 2025-11-25 09:22:43.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:45 np0005534516 nova_compute[253538]: 2025-11-25 09:22:45.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:48 np0005534516 nova_compute[253538]: 2025-11-25 09:22:48.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:50 np0005534516 nova_compute[253538]: 2025-11-25 09:22:50.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:50 np0005534516 podman[426069]: 2025-11-25 09:22:50.801850277 +0000 UTC m=+0.047660020 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 04:22:50 np0005534516 podman[426068]: 2025-11-25 09:22:50.818249584 +0000 UTC m=+0.064709755 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:22:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:22:53
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr']
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:22:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:22:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:53 np0005534516 nova_compute[253538]: 2025-11-25 09:22:53.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:22:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:55 np0005534516 nova_compute[253538]: 2025-11-25 09:22:55.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:22:55 np0005534516 nova_compute[253538]: 2025-11-25 09:22:55.901 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:22:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:22:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:22:58 np0005534516 nova_compute[253538]: 2025-11-25 09:22:58.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:00 np0005534516 nova_compute[253538]: 2025-11-25 09:23:00.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:00 np0005534516 nova_compute[253538]: 2025-11-25 09:23:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:01 np0005534516 podman[426108]: 2025-11-25 09:23:01.854465186 +0000 UTC m=+0.107469851 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:23:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:03 np0005534516 nova_compute[253538]: 2025-11-25 09:23:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:03 np0005534516 nova_compute[253538]: 2025-11-25 09:23:03.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:23:03 np0005534516 nova_compute[253538]: 2025-11-25 09:23:03.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:23:03 np0005534516 nova_compute[253538]: 2025-11-25 09:23:03.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:23:03 np0005534516 nova_compute[253538]: 2025-11-25 09:23:03.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:23:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:05 np0005534516 nova_compute[253538]: 2025-11-25 09:23:05.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:07 np0005534516 nova_compute[253538]: 2025-11-25 09:23:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:07 np0005534516 nova_compute[253538]: 2025-11-25 09:23:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:08 np0005534516 nova_compute[253538]: 2025-11-25 09:23:08.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:09 np0005534516 nova_compute[253538]: 2025-11-25 09:23:09.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:09 np0005534516 nova_compute[253538]: 2025-11-25 09:23:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:09 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:10 np0005534516 nova_compute[253538]: 2025-11-25 09:23:10.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.700784174 +0000 UTC m=+0.079682244 container create 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.646488984 +0000 UTC m=+0.025387144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:10 np0005534516 systemd[1]: Started libpod-conmon-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope.
Nov 25 04:23:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.817679291 +0000 UTC m=+0.196577381 container init 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.826579444 +0000 UTC m=+0.205477504 container start 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:10 np0005534516 angry_bell[426545]: 167 167
Nov 25 04:23:10 np0005534516 systemd[1]: libpod-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope: Deactivated successfully.
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.835031434 +0000 UTC m=+0.213929564 container attach 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.835689332 +0000 UTC m=+0.214587422 container died 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8073415963f06c7c1cbbcf1051b12de6b432bc7e46e8a934451066151877388e-merged.mount: Deactivated successfully.
Nov 25 04:23:10 np0005534516 podman[426528]: 2025-11-25 09:23:10.91521123 +0000 UTC m=+0.294109300 container remove 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:23:10 np0005534516 systemd[1]: libpod-conmon-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope: Deactivated successfully.
Nov 25 04:23:11 np0005534516 podman[426567]: 2025-11-25 09:23:11.152863879 +0000 UTC m=+0.083068096 container create b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:11 np0005534516 podman[426567]: 2025-11-25 09:23:11.10815263 +0000 UTC m=+0.038356927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:11 np0005534516 systemd[1]: Started libpod-conmon-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope.
Nov 25 04:23:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:11 np0005534516 podman[426567]: 2025-11-25 09:23:11.264547513 +0000 UTC m=+0.194751760 container init b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:23:11 np0005534516 podman[426567]: 2025-11-25 09:23:11.274561347 +0000 UTC m=+0.204765564 container start b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:23:11 np0005534516 podman[426567]: 2025-11-25 09:23:11.281090794 +0000 UTC m=+0.211295011 container attach b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:23:11 np0005534516 nova_compute[253538]: 2025-11-25 09:23:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]: [
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:    {
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "available": false,
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "ceph_device": false,
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "lsm_data": {},
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "lvs": [],
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "path": "/dev/sr0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "rejected_reasons": [
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "Insufficient space (<5GB)",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "Has a FileSystem"
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        ],
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        "sys_api": {
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "actuators": null,
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "device_nodes": "sr0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "devname": "sr0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "human_readable_size": "482.00 KB",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "id_bus": "ata",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "model": "QEMU DVD-ROM",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "nr_requests": "2",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "parent": "/dev/sr0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "partitions": {},
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "path": "/dev/sr0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "removable": "1",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "rev": "2.5+",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "ro": "0",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "rotational": "1",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "sas_address": "",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "sas_device_handle": "",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "scheduler_mode": "mq-deadline",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "sectors": 0,
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "sectorsize": "2048",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "size": 493568.0,
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "support_discard": "2048",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "type": "disk",
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:            "vendor": "QEMU"
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:        }
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]:    }
Nov 25 04:23:12 np0005534516 gifted_meninsky[426583]: ]
Nov 25 04:23:12 np0005534516 systemd[1]: libpod-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Deactivated successfully.
Nov 25 04:23:12 np0005534516 podman[426567]: 2025-11-25 09:23:12.833544087 +0000 UTC m=+1.763748294 container died b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:12 np0005534516 systemd[1]: libpod-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Consumed 1.603s CPU time.
Nov 25 04:23:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d-merged.mount: Deactivated successfully.
Nov 25 04:23:12 np0005534516 podman[426567]: 2025-11-25 09:23:12.90701961 +0000 UTC m=+1.837223827 container remove b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:23:12 np0005534516 systemd[1]: libpod-conmon-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Deactivated successfully.
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 973b0880-37fd-4120-8bcd-1170032b75fc does not exist
Nov 25 04:23:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8287d27b-d122-4fd8-92a4-7f6039ee1d89 does not exist
Nov 25 04:23:12 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev de7f56ac-9082-4abf-adc9-a48dd64c1da8 does not exist
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:23:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.600016373 +0000 UTC m=+0.037478023 container create 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 04:23:13 np0005534516 systemd[1]: Started libpod-conmon-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope.
Nov 25 04:23:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.671745888 +0000 UTC m=+0.109207548 container init 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.679353556 +0000 UTC m=+0.116815206 container start 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.582942288 +0000 UTC m=+0.020403978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.682996335 +0000 UTC m=+0.120457985 container attach 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:23:13 np0005534516 nova_compute[253538]: 2025-11-25 09:23:13.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:13 np0005534516 eloquent_benz[428583]: 167 167
Nov 25 04:23:13 np0005534516 systemd[1]: libpod-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope: Deactivated successfully.
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.687736174 +0000 UTC m=+0.125197824 container died 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:23:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-94a9c39dc40a2f862173b6897472e6c06bdbedb1ca2478d2784bbb00c62ef05a-merged.mount: Deactivated successfully.
Nov 25 04:23:13 np0005534516 podman[428567]: 2025-11-25 09:23:13.726026259 +0000 UTC m=+0.163487909 container remove 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:23:13 np0005534516 systemd[1]: libpod-conmon-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope: Deactivated successfully.
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:13 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:23:13 np0005534516 podman[428608]: 2025-11-25 09:23:13.902066887 +0000 UTC m=+0.046930890 container create 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:23:13 np0005534516 systemd[1]: Started libpod-conmon-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope.
Nov 25 04:23:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:13 np0005534516 podman[428608]: 2025-11-25 09:23:13.884886769 +0000 UTC m=+0.029750792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:14 np0005534516 podman[428608]: 2025-11-25 09:23:14.001496948 +0000 UTC m=+0.146360971 container init 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 04:23:14 np0005534516 podman[428608]: 2025-11-25 09:23:14.014080851 +0000 UTC m=+0.158944854 container start 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:23:14 np0005534516 podman[428608]: 2025-11-25 09:23:14.017573127 +0000 UTC m=+0.162437150 container attach 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:15 np0005534516 competent_mirzakhani[428624]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:23:15 np0005534516 competent_mirzakhani[428624]: --> relative data size: 1.0
Nov 25 04:23:15 np0005534516 competent_mirzakhani[428624]: --> All data devices are unavailable
Nov 25 04:23:15 np0005534516 systemd[1]: libpod-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Deactivated successfully.
Nov 25 04:23:15 np0005534516 podman[428608]: 2025-11-25 09:23:15.070743939 +0000 UTC m=+1.215607942 container died 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 04:23:15 np0005534516 systemd[1]: libpod-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Consumed 1.010s CPU time.
Nov 25 04:23:15 np0005534516 nova_compute[253538]: 2025-11-25 09:23:15.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e-merged.mount: Deactivated successfully.
Nov 25 04:23:15 np0005534516 podman[428608]: 2025-11-25 09:23:15.446901773 +0000 UTC m=+1.591765776 container remove 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:23:15 np0005534516 systemd[1]: libpod-conmon-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Deactivated successfully.
Nov 25 04:23:15 np0005534516 nova_compute[253538]: 2025-11-25 09:23:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.156667292 +0000 UTC m=+0.063525312 container create 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:23:16 np0005534516 systemd[1]: Started libpod-conmon-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope.
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.121788492 +0000 UTC m=+0.028646552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.246599094 +0000 UTC m=+0.153457154 container init 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.258215361 +0000 UTC m=+0.165073381 container start 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.260572355 +0000 UTC m=+0.167430405 container attach 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:23:16 np0005534516 quizzical_varahamihira[428818]: 167 167
Nov 25 04:23:16 np0005534516 systemd[1]: libpod-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope: Deactivated successfully.
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.263293649 +0000 UTC m=+0.170151669 container died 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:23:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-4258785ca6da09afc181c177b78d82b1bc24775f9097944fa86bf0ec53422a8d-merged.mount: Deactivated successfully.
Nov 25 04:23:16 np0005534516 podman[428804]: 2025-11-25 09:23:16.419371204 +0000 UTC m=+0.326229224 container remove 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 04:23:16 np0005534516 systemd[1]: libpod-conmon-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope: Deactivated successfully.
Nov 25 04:23:16 np0005534516 podman[428842]: 2025-11-25 09:23:16.619194752 +0000 UTC m=+0.069458784 container create abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:16 np0005534516 podman[428842]: 2025-11-25 09:23:16.570797123 +0000 UTC m=+0.021061175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:16 np0005534516 systemd[1]: Started libpod-conmon-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope.
Nov 25 04:23:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:16 np0005534516 podman[428842]: 2025-11-25 09:23:16.822182556 +0000 UTC m=+0.272446608 container init abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:16 np0005534516 podman[428842]: 2025-11-25 09:23:16.829252138 +0000 UTC m=+0.279516170 container start abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:23:16 np0005534516 podman[428842]: 2025-11-25 09:23:16.837165474 +0000 UTC m=+0.287429506 container attach abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]: {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    "0": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "devices": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "/dev/loop3"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            ],
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_name": "ceph_lv0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_size": "21470642176",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "name": "ceph_lv0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "tags": {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_name": "ceph",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.crush_device_class": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.encrypted": "0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_id": "0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.vdo": "0"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            },
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "vg_name": "ceph_vg0"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        }
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    ],
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    "1": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "devices": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "/dev/loop4"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            ],
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_name": "ceph_lv1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_size": "21470642176",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "name": "ceph_lv1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "tags": {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_name": "ceph",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.crush_device_class": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.encrypted": "0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_id": "1",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.vdo": "0"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            },
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "vg_name": "ceph_vg1"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        }
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    ],
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    "2": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "devices": [
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "/dev/loop5"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            ],
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_name": "ceph_lv2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_size": "21470642176",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "name": "ceph_lv2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "tags": {
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.cluster_name": "ceph",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.crush_device_class": "",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.encrypted": "0",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osd_id": "2",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:                "ceph.vdo": "0"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            },
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "type": "block",
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:            "vg_name": "ceph_vg2"
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:        }
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]:    ]
Nov 25 04:23:17 np0005534516 funny_engelbart[428859]: }
Nov 25 04:23:17 np0005534516 systemd[1]: libpod-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope: Deactivated successfully.
Nov 25 04:23:17 np0005534516 podman[428842]: 2025-11-25 09:23:17.635731735 +0000 UTC m=+1.085995767 container died abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280-merged.mount: Deactivated successfully.
Nov 25 04:23:17 np0005534516 podman[428842]: 2025-11-25 09:23:17.691335731 +0000 UTC m=+1.141599763 container remove abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:23:17 np0005534516 systemd[1]: libpod-conmon-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope: Deactivated successfully.
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.295106921 +0000 UTC m=+0.044401081 container create 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 04:23:18 np0005534516 systemd[1]: Started libpod-conmon-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope.
Nov 25 04:23:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.273368648 +0000 UTC m=+0.022662848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.371579386 +0000 UTC m=+0.120873606 container init 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.378411452 +0000 UTC m=+0.127705622 container start 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 04:23:18 np0005534516 confident_black[429034]: 167 167
Nov 25 04:23:18 np0005534516 systemd[1]: libpod-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope: Deactivated successfully.
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.382970556 +0000 UTC m=+0.132264826 container attach 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.383788198 +0000 UTC m=+0.133082368 container died 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:23:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-cac98c9c302d7235eea2a7aa8fd21d2ed7e5e6122f3244eb3a8de79b5cfb487c-merged.mount: Deactivated successfully.
Nov 25 04:23:18 np0005534516 podman[429018]: 2025-11-25 09:23:18.433051592 +0000 UTC m=+0.182345762 container remove 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 04:23:18 np0005534516 systemd[1]: libpod-conmon-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope: Deactivated successfully.
Nov 25 04:23:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:18 np0005534516 podman[429060]: 2025-11-25 09:23:18.59403446 +0000 UTC m=+0.043605399 container create 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:23:18 np0005534516 systemd[1]: Started libpod-conmon-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope.
Nov 25 04:23:18 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:23:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:18 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:23:18 np0005534516 podman[429060]: 2025-11-25 09:23:18.577555291 +0000 UTC m=+0.027126250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:23:18 np0005534516 podman[429060]: 2025-11-25 09:23:18.672963571 +0000 UTC m=+0.122534530 container init 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:18 np0005534516 podman[429060]: 2025-11-25 09:23:18.681030932 +0000 UTC m=+0.130601871 container start 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:23:18 np0005534516 podman[429060]: 2025-11-25 09:23:18.685563115 +0000 UTC m=+0.135134104 container attach 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:23:18 np0005534516 nova_compute[253538]: 2025-11-25 09:23:18.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:19 np0005534516 nova_compute[253538]: 2025-11-25 09:23:19.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]: {
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_id": 1,
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "type": "bluestore"
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    },
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_id": 2,
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "type": "bluestore"
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    },
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_id": 0,
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:        "type": "bluestore"
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]:    }
Nov 25 04:23:19 np0005534516 interesting_heisenberg[429077]: }
Nov 25 04:23:19 np0005534516 systemd[1]: libpod-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Deactivated successfully.
Nov 25 04:23:19 np0005534516 systemd[1]: libpod-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Consumed 1.021s CPU time.
Nov 25 04:23:19 np0005534516 podman[429060]: 2025-11-25 09:23:19.697848652 +0000 UTC m=+1.147419611 container died 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:23:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a-merged.mount: Deactivated successfully.
Nov 25 04:23:19 np0005534516 podman[429060]: 2025-11-25 09:23:19.753283444 +0000 UTC m=+1.202854383 container remove 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:23:19 np0005534516 systemd[1]: libpod-conmon-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Deactivated successfully.
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b0f72af-ab7d-49d1-a5bf-2327280f6b71 does not exist
Nov 25 04:23:19 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2508a217-5be7-4000-9bd3-674eb995826f does not exist
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:19 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:23:20 np0005534516 nova_compute[253538]: 2025-11-25 09:23:20.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:23:21 np0005534516 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:23:21 np0005534516 podman[429177]: 2025-11-25 09:23:21.956479768 +0000 UTC m=+0.056916623 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:23:21 np0005534516 podman[429195]: 2025-11-25 09:23:21.980963735 +0000 UTC m=+0.081603756 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 04:23:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:23:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720521840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.059 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.270 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3580MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.499 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.499 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.590 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:23:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.671 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.672 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.688 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.709 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:23:22 np0005534516 nova_compute[253538]: 2025-11-25 09:23:22.727 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:23:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:23:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240611041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.216 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.222 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.242 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.244 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.244 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:23 np0005534516 nova_compute[253538]: 2025-11-25 09:23:23.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:25 np0005534516 nova_compute[253538]: 2025-11-25 09:23:25.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:23:28 np0005534516 nova_compute[253538]: 2025-11-25 09:23:28.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:23:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:23:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:23:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:23:30 np0005534516 nova_compute[253538]: 2025-11-25 09:23:30.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 0 op/s
Nov 25 04:23:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 0 op/s
Nov 25 04:23:32 np0005534516 podman[429257]: 2025-11-25 09:23:32.860235215 +0000 UTC m=+0.111608804 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 04:23:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:33 np0005534516 nova_compute[253538]: 2025-11-25 09:23:33.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 04:23:35 np0005534516 nova_compute[253538]: 2025-11-25 09:23:35.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 04:23:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 25 04:23:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 25 04:23:37 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 25 04:23:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 25 op/s
Nov 25 04:23:38 np0005534516 nova_compute[253538]: 2025-11-25 09:23:38.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:40 np0005534516 nova_compute[253538]: 2025-11-25 09:23:40.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 37 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.1 KiB/s wr, 26 op/s
Nov 25 04:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.110 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.110 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:23:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.111 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:23:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 25 04:23:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 25 04:23:42 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 25 04:23:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 04:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 25 04:23:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 25 04:23:43 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 25 04:23:43 np0005534516 nova_compute[253538]: 2025-11-25 09:23:43.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 13 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Nov 25 04:23:45 np0005534516 nova_compute[253538]: 2025-11-25 09:23:45.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.6 KiB/s wr, 41 op/s
Nov 25 04:23:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 25 04:23:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 25 04:23:48 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 25 04:23:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Nov 25 04:23:48 np0005534516 nova_compute[253538]: 2025-11-25 09:23:48.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:50 np0005534516 nova_compute[253538]: 2025-11-25 09:23:50.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 38 op/s
Nov 25 04:23:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 897 B/s wr, 6 op/s
Nov 25 04:23:52 np0005534516 podman[429287]: 2025-11-25 09:23:52.825215474 +0000 UTC m=+0.064236242 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 04:23:52 np0005534516 podman[429286]: 2025-11-25 09:23:52.835419631 +0000 UTC m=+0.080320490 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:23:53
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:23:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:23:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:53 np0005534516 nova_compute[253538]: 2025-11-25 09:23:53.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:23:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 04:23:55 np0005534516 nova_compute[253538]: 2025-11-25 09:23:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:23:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 25 04:23:56 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 25 04:23:56 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 25 04:23:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1007 B/s wr, 14 op/s
Nov 25 04:23:57 np0005534516 nova_compute[253538]: 2025-11-25 09:23:57.244 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:23:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:23:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 04:23:58 np0005534516 nova_compute[253538]: 2025-11-25 09:23:58.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:00 np0005534516 nova_compute[253538]: 2025-11-25 09:24:00.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:00 np0005534516 nova_compute[253538]: 2025-11-25 09:24:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 04:24:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 04:24:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:03 np0005534516 nova_compute[253538]: 2025-11-25 09:24:03.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:03 np0005534516 podman[429322]: 2025-11-25 09:24:03.85988657 +0000 UTC m=+0.102592118 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:24:04 np0005534516 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:04 np0005534516 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:24:04 np0005534516 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:24:04 np0005534516 nova_compute[253538]: 2025-11-25 09:24:04.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:24:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 04:24:05 np0005534516 nova_compute[253538]: 2025-11-25 09:24:05.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 04:24:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:08 np0005534516 nova_compute[253538]: 2025-11-25 09:24:08.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Nov 25 04:24:08 np0005534516 nova_compute[253538]: 2025-11-25 09:24:08.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:09 np0005534516 nova_compute[253538]: 2025-11-25 09:24:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:10 np0005534516 nova_compute[253538]: 2025-11-25 09:24:10.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:10 np0005534516 nova_compute[253538]: 2025-11-25 09:24:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:10 np0005534516 nova_compute[253538]: 2025-11-25 09:24:10.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:24:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 25 04:24:12 np0005534516 nova_compute[253538]: 2025-11-25 09:24:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:13 np0005534516 nova_compute[253538]: 2025-11-25 09:24:13.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:15 np0005534516 nova_compute[253538]: 2025-11-25 09:24:15.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:15 np0005534516 nova_compute[253538]: 2025-11-25 09:24:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:18 np0005534516 nova_compute[253538]: 2025-11-25 09:24:18.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:20 np0005534516 nova_compute[253538]: 2025-11-25 09:24:20.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5b3a8aa6-3fd8-41bf-9141-f4417ec61663 does not exist
Nov 25 04:24:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 198ba67a-df82-4450-a3f2-13372c330822 does not exist
Nov 25 04:24:20 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 6bb50488-8344-493f-9981-6a614c67982b does not exist
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:20 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:24:21 np0005534516 podman[429618]: 2025-11-25 09:24:21.462937798 +0000 UTC m=+0.024602791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:21 np0005534516 podman[429618]: 2025-11-25 09:24:21.804809298 +0000 UTC m=+0.366474261 container create 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:24:21 np0005534516 systemd[1]: Started libpod-conmon-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope.
Nov 25 04:24:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:22 np0005534516 podman[429618]: 2025-11-25 09:24:22.069693979 +0000 UTC m=+0.631358972 container init 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:24:22 np0005534516 podman[429618]: 2025-11-25 09:24:22.078490389 +0000 UTC m=+0.640155382 container start 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:24:22 np0005534516 condescending_dijkstra[429635]: 167 167
Nov 25 04:24:22 np0005534516 systemd[1]: libpod-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope: Deactivated successfully.
Nov 25 04:24:22 np0005534516 podman[429618]: 2025-11-25 09:24:22.150463701 +0000 UTC m=+0.712128674 container attach 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 04:24:22 np0005534516 podman[429618]: 2025-11-25 09:24:22.151481709 +0000 UTC m=+0.713146762 container died 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:24:22 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bf9474d5906eca69310c566eeb4f5c30bc1f4e2689985c4e08696e7318a470fb-merged.mount: Deactivated successfully.
Nov 25 04:24:22 np0005534516 podman[429618]: 2025-11-25 09:24:22.465572381 +0000 UTC m=+1.027237344 container remove 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:24:22 np0005534516 systemd[1]: libpod-conmon-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope: Deactivated successfully.
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:24:22 np0005534516 nova_compute[253538]: 2025-11-25 09:24:22.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:24:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:22 np0005534516 podman[429659]: 2025-11-25 09:24:22.731373068 +0000 UTC m=+0.094378745 container create ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:24:22 np0005534516 podman[429659]: 2025-11-25 09:24:22.661216015 +0000 UTC m=+0.024221792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:22 np0005534516 systemd[1]: Started libpod-conmon-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope.
Nov 25 04:24:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:22 np0005534516 podman[429659]: 2025-11-25 09:24:22.892413378 +0000 UTC m=+0.255419065 container init ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:24:22 np0005534516 podman[429659]: 2025-11-25 09:24:22.901998109 +0000 UTC m=+0.265003786 container start ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:24:22 np0005534516 podman[429659]: 2025-11-25 09:24:22.907704205 +0000 UTC m=+0.270709902 container attach ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 04:24:22 np0005534516 podman[429696]: 2025-11-25 09:24:22.955260611 +0000 UTC m=+0.077659678 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:24:22 np0005534516 podman[429698]: 2025-11-25 09:24:22.959113466 +0000 UTC m=+0.082266124 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 04:24:22 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:24:22 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842557725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.001 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.202 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.286 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.287 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.305 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:24:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1799143828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.778 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.784 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.797 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.800 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:24:23 np0005534516 nova_compute[253538]: 2025-11-25 09:24:23.801 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:24:24 np0005534516 busy_gauss[429695]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:24:24 np0005534516 busy_gauss[429695]: --> relative data size: 1.0
Nov 25 04:24:24 np0005534516 busy_gauss[429695]: --> All data devices are unavailable
Nov 25 04:24:24 np0005534516 systemd[1]: libpod-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Deactivated successfully.
Nov 25 04:24:24 np0005534516 systemd[1]: libpod-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Consumed 1.069s CPU time.
Nov 25 04:24:24 np0005534516 podman[429659]: 2025-11-25 09:24:24.363741788 +0000 UTC m=+1.726747505 container died ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:24:24 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146-merged.mount: Deactivated successfully.
Nov 25 04:24:24 np0005534516 podman[429659]: 2025-11-25 09:24:24.600624756 +0000 UTC m=+1.963630473 container remove ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:24:24 np0005534516 systemd[1]: libpod-conmon-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Deactivated successfully.
Nov 25 04:24:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:25 np0005534516 nova_compute[253538]: 2025-11-25 09:24:25.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.329657061 +0000 UTC m=+0.060633594 container create 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:24:25 np0005534516 systemd[1]: Started libpod-conmon-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope.
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.298729738 +0000 UTC m=+0.029706291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.4370937 +0000 UTC m=+0.168070323 container init 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.444558664 +0000 UTC m=+0.175535207 container start 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:24:25 np0005534516 peaceful_black[429956]: 167 167
Nov 25 04:24:25 np0005534516 systemd[1]: libpod-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope: Deactivated successfully.
Nov 25 04:24:25 np0005534516 conmon[429956]: conmon 9eee93f695bd189282d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope/container/memory.events
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.473185824 +0000 UTC m=+0.204162357 container attach 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.473636906 +0000 UTC m=+0.204613439 container died 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:24:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c06a83d4316436aaabf1971b1507ccb748b0ea409445c71956122961893d2422-merged.mount: Deactivated successfully.
Nov 25 04:24:25 np0005534516 podman[429940]: 2025-11-25 09:24:25.606871689 +0000 UTC m=+0.337848222 container remove 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:24:25 np0005534516 systemd[1]: libpod-conmon-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope: Deactivated successfully.
Nov 25 04:24:25 np0005534516 podman[429982]: 2025-11-25 09:24:25.78226605 +0000 UTC m=+0.056486391 container create f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:24:25 np0005534516 systemd[1]: Started libpod-conmon-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope.
Nov 25 04:24:25 np0005534516 podman[429982]: 2025-11-25 09:24:25.751373898 +0000 UTC m=+0.025594269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:25 np0005534516 podman[429982]: 2025-11-25 09:24:25.886968735 +0000 UTC m=+0.161189106 container init f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:24:25 np0005534516 podman[429982]: 2025-11-25 09:24:25.894599742 +0000 UTC m=+0.168820083 container start f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 04:24:25 np0005534516 podman[429982]: 2025-11-25 09:24:25.913652312 +0000 UTC m=+0.187872673 container attach f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:24:26 np0005534516 gracious_spence[429998]: {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    "0": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "devices": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "/dev/loop3"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            ],
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_name": "ceph_lv0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_size": "21470642176",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "name": "ceph_lv0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "tags": {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_name": "ceph",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.crush_device_class": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.encrypted": "0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_id": "0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.vdo": "0"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            },
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "vg_name": "ceph_vg0"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        }
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    ],
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    "1": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "devices": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "/dev/loop4"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            ],
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_name": "ceph_lv1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_size": "21470642176",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "name": "ceph_lv1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "tags": {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_name": "ceph",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.crush_device_class": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.encrypted": "0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_id": "1",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.vdo": "0"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            },
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "vg_name": "ceph_vg1"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        }
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    ],
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    "2": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "devices": [
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "/dev/loop5"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            ],
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_name": "ceph_lv2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_size": "21470642176",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "name": "ceph_lv2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "tags": {
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.cluster_name": "ceph",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.crush_device_class": "",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.encrypted": "0",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osd_id": "2",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:                "ceph.vdo": "0"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            },
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "type": "block",
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:            "vg_name": "ceph_vg2"
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:        }
Nov 25 04:24:26 np0005534516 gracious_spence[429998]:    ]
Nov 25 04:24:26 np0005534516 gracious_spence[429998]: }
Nov 25 04:24:26 np0005534516 systemd[1]: libpod-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope: Deactivated successfully.
Nov 25 04:24:26 np0005534516 podman[429982]: 2025-11-25 09:24:26.674196236 +0000 UTC m=+0.948416577 container died f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:24:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262-merged.mount: Deactivated successfully.
Nov 25 04:24:27 np0005534516 podman[429982]: 2025-11-25 09:24:27.003618556 +0000 UTC m=+1.277838907 container remove f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:24:27 np0005534516 systemd[1]: libpod-conmon-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope: Deactivated successfully.
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.660873864 +0000 UTC m=+0.048571535 container create a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:24:27 np0005534516 systemd[1]: Started libpod-conmon-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope.
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.636428917 +0000 UTC m=+0.024126608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.753662124 +0000 UTC m=+0.141359815 container init a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.762122514 +0000 UTC m=+0.149820185 container start a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 04:24:27 np0005534516 nifty_tesla[430177]: 167 167
Nov 25 04:24:27 np0005534516 systemd[1]: libpod-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope: Deactivated successfully.
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.767185092 +0000 UTC m=+0.154882773 container attach a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.767937313 +0000 UTC m=+0.155634984 container died a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:24:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9c9fe62981956c8351245bcf107fb6ab5cb772e00c1f7df87594183c1196bed5-merged.mount: Deactivated successfully.
Nov 25 04:24:27 np0005534516 podman[430160]: 2025-11-25 09:24:27.823410735 +0000 UTC m=+0.211108406 container remove a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:24:27 np0005534516 systemd[1]: libpod-conmon-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope: Deactivated successfully.
Nov 25 04:24:28 np0005534516 podman[430201]: 2025-11-25 09:24:28.009975811 +0000 UTC m=+0.045545683 container create 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:24:28 np0005534516 systemd[1]: Started libpod-conmon-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope.
Nov 25 04:24:28 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:24:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:28 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:24:28 np0005534516 podman[430201]: 2025-11-25 09:24:27.988960148 +0000 UTC m=+0.024530040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:24:28 np0005534516 podman[430201]: 2025-11-25 09:24:28.094565368 +0000 UTC m=+0.130135270 container init 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:24:28 np0005534516 podman[430201]: 2025-11-25 09:24:28.101225039 +0000 UTC m=+0.136794931 container start 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:24:28 np0005534516 podman[430201]: 2025-11-25 09:24:28.10494991 +0000 UTC m=+0.140519812 container attach 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:24:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:28 np0005534516 nova_compute[253538]: 2025-11-25 09:24:28.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]: {
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_id": 1,
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "type": "bluestore"
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    },
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_id": 2,
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "type": "bluestore"
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    },
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_id": 0,
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:        "type": "bluestore"
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]:    }
Nov 25 04:24:29 np0005534516 goofy_grothendieck[430218]: }
Nov 25 04:24:29 np0005534516 systemd[1]: libpod-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Deactivated successfully.
Nov 25 04:24:29 np0005534516 podman[430201]: 2025-11-25 09:24:29.203013516 +0000 UTC m=+1.238583388 container died 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:24:29 np0005534516 systemd[1]: libpod-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Consumed 1.107s CPU time.
Nov 25 04:24:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c-merged.mount: Deactivated successfully.
Nov 25 04:24:29 np0005534516 podman[430201]: 2025-11-25 09:24:29.421940754 +0000 UTC m=+1.457510626 container remove 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:24:29 np0005534516 systemd[1]: libpod-conmon-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Deactivated successfully.
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:24:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 60ca1655-f090-45bb-aa9f-cc22d9861cf6 does not exist
Nov 25 04:24:29 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 43f1d31d-9801-46c2-9ece-b50c2b825cc4 does not exist
Nov 25 04:24:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:30 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:24:30 np0005534516 nova_compute[253538]: 2025-11-25 09:24:30.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:33 np0005534516 nova_compute[253538]: 2025-11-25 09:24:33.759 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:34 np0005534516 podman[430314]: 2025-11-25 09:24:34.869166786 +0000 UTC m=+0.107878652 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:24:35 np0005534516 nova_compute[253538]: 2025-11-25 09:24:35.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:38 np0005534516 nova_compute[253538]: 2025-11-25 09:24:38.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:40 np0005534516 nova_compute[253538]: 2025-11-25 09:24:40.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.112 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.112 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:24:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.113 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:24:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:43 np0005534516 nova_compute[253538]: 2025-11-25 09:24:43.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:45 np0005534516 nova_compute[253538]: 2025-11-25 09:24:45.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:48 np0005534516 nova_compute[253538]: 2025-11-25 09:24:48.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:50 np0005534516 nova_compute[253538]: 2025-11-25 09:24:50.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:24:53
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log']
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:24:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:24:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:53 np0005534516 podman[430341]: 2025-11-25 09:24:53.822079023 +0000 UTC m=+0.069484705 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:24:53 np0005534516 podman[430342]: 2025-11-25 09:24:53.866478363 +0000 UTC m=+0.111860461 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 04:24:53 np0005534516 nova_compute[253538]: 2025-11-25 09:24:53.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:24:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:55 np0005534516 nova_compute[253538]: 2025-11-25 09:24:55.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:24:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:24:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:24:58 np0005534516 nova_compute[253538]: 2025-11-25 09:24:58.801 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:24:58 np0005534516 nova_compute[253538]: 2025-11-25 09:24:58.899 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:00 np0005534516 nova_compute[253538]: 2025-11-25 09:25:00.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:02 np0005534516 nova_compute[253538]: 2025-11-25 09:25:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:03 np0005534516 nova_compute[253538]: 2025-11-25 09:25:03.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:04 np0005534516 nova_compute[253538]: 2025-11-25 09:25:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:04 np0005534516 nova_compute[253538]: 2025-11-25 09:25:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:25:04 np0005534516 nova_compute[253538]: 2025-11-25 09:25:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:25:04 np0005534516 nova_compute[253538]: 2025-11-25 09:25:04.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:25:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:05 np0005534516 nova_compute[253538]: 2025-11-25 09:25:05.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:05 np0005534516 podman[430380]: 2025-11-25 09:25:05.828403622 +0000 UTC m=+0.080194316 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:25:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:08 np0005534516 nova_compute[253538]: 2025-11-25 09:25:08.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:09 np0005534516 nova_compute[253538]: 2025-11-25 09:25:09.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:10 np0005534516 nova_compute[253538]: 2025-11-25 09:25:10.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:11 np0005534516 nova_compute[253538]: 2025-11-25 09:25:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:12 np0005534516 nova_compute[253538]: 2025-11-25 09:25:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:12 np0005534516 nova_compute[253538]: 2025-11-25 09:25:12.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:25:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:13 np0005534516 nova_compute[253538]: 2025-11-25 09:25:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:13 np0005534516 nova_compute[253538]: 2025-11-25 09:25:13.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:15 np0005534516 nova_compute[253538]: 2025-11-25 09:25:15.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:15 np0005534516 nova_compute[253538]: 2025-11-25 09:25:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:18 np0005534516 nova_compute[253538]: 2025-11-25 09:25:18.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:20 np0005534516 nova_compute[253538]: 2025-11-25 09:25:20.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:20 np0005534516 nova_compute[253538]: 2025-11-25 09:25:20.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.100760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723100804, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2078, "num_deletes": 255, "total_data_size": 3505756, "memory_usage": 3556160, "flush_reason": "Manual Compaction"}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723183043, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3427712, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63740, "largest_seqno": 65817, "table_properties": {"data_size": 3418131, "index_size": 6138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19182, "raw_average_key_size": 20, "raw_value_size": 3399066, "raw_average_value_size": 3600, "num_data_blocks": 273, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062499, "oldest_key_time": 1764062499, "file_creation_time": 1764062723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 82362 microseconds, and 7407 cpu microseconds.
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.183113) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3427712 bytes OK
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.183146) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275116) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275178) EVENT_LOG_v1 {"time_micros": 1764062723275161, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3497039, prev total WAL file size 3497039, number of live WAL files 2.
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.277296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3347KB)], [152(8024KB)]
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723277401, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11644617, "oldest_snapshot_seqno": -1}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8461 keys, 9904845 bytes, temperature: kUnknown
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723389884, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9904845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9851743, "index_size": 30845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 221454, "raw_average_key_size": 26, "raw_value_size": 9704174, "raw_average_value_size": 1146, "num_data_blocks": 1195, "num_entries": 8461, "num_filter_entries": 8461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.390221) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9904845 bytes
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.400415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.4 rd, 87.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8985, records dropped: 524 output_compression: NoCompression
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.400456) EVENT_LOG_v1 {"time_micros": 1764062723400440, "job": 94, "event": "compaction_finished", "compaction_time_micros": 112648, "compaction_time_cpu_micros": 32948, "output_level": 6, "num_output_files": 1, "total_output_size": 9904845, "num_input_records": 8985, "num_output_records": 8461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723401340, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723402944, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.277159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:23 np0005534516 nova_compute[253538]: 2025-11-25 09:25:23.941 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:25:24 np0005534516 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:25:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:24 np0005534516 podman[430425]: 2025-11-25 09:25:24.818621596 +0000 UTC m=+0.064279684 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:25:24 np0005534516 podman[430429]: 2025-11-25 09:25:24.828531626 +0000 UTC m=+0.061689272 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:25:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:25:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784927242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.092 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.299 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.300 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.301 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.360 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.361 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.376 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:25:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:25:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780056618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.856 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.863 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.877 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.881 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:25:25 np0005534516 nova_compute[253538]: 2025-11-25 09:25:25.882 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:25:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:28 np0005534516 nova_compute[253538]: 2025-11-25 09:25:28.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:25:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:25:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:25:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:25:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:25:30 np0005534516 nova_compute[253538]: 2025-11-25 09:25:30.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f9cec71a-3741-4f83-bed2-1e79606723dc does not exist
Nov 25 04:25:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a0672b7e-c367-4883-a2eb-eac914e881db does not exist
Nov 25 04:25:30 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 409dfc27-9e95-48ce-92f0-d2a22b5d2e5a does not exist
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:25:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:25:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:25:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:25:31 np0005534516 podman[430759]: 2025-11-25 09:25:31.359978476 +0000 UTC m=+0.041358338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:31 np0005534516 podman[430759]: 2025-11-25 09:25:31.587442417 +0000 UTC m=+0.268822219 container create 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:25:31 np0005534516 systemd[1]: Started libpod-conmon-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope.
Nov 25 04:25:31 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:31 np0005534516 podman[430759]: 2025-11-25 09:25:31.922035709 +0000 UTC m=+0.603415571 container init 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:25:31 np0005534516 podman[430759]: 2025-11-25 09:25:31.930790188 +0000 UTC m=+0.612169960 container start 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:25:31 np0005534516 stupefied_payne[430775]: 167 167
Nov 25 04:25:31 np0005534516 systemd[1]: libpod-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope: Deactivated successfully.
Nov 25 04:25:31 np0005534516 conmon[430775]: conmon 93933f860f36901f407a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope/container/memory.events
Nov 25 04:25:32 np0005534516 podman[430759]: 2025-11-25 09:25:32.011996122 +0000 UTC m=+0.693375884 container attach 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:32 np0005534516 podman[430759]: 2025-11-25 09:25:32.014163581 +0000 UTC m=+0.695543383 container died 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:25:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5f10c336e918bf3ebfad48bc67c88ed1fb929ba3d3816ea20ee7fe2aeaa4d031-merged.mount: Deactivated successfully.
Nov 25 04:25:32 np0005534516 podman[430759]: 2025-11-25 09:25:32.543604405 +0000 UTC m=+1.224984177 container remove 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:25:32 np0005534516 systemd[1]: libpod-conmon-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope: Deactivated successfully.
Nov 25 04:25:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:32 np0005534516 podman[430800]: 2025-11-25 09:25:32.782340523 +0000 UTC m=+0.047143726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:32 np0005534516 podman[430800]: 2025-11-25 09:25:32.925545337 +0000 UTC m=+0.190348540 container create 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:25:33 np0005534516 systemd[1]: Started libpod-conmon-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope.
Nov 25 04:25:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:33 np0005534516 podman[430800]: 2025-11-25 09:25:33.138557815 +0000 UTC m=+0.403360978 container init 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:25:33 np0005534516 podman[430800]: 2025-11-25 09:25:33.146662286 +0000 UTC m=+0.411465469 container start 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:25:33 np0005534516 podman[430800]: 2025-11-25 09:25:33.219645695 +0000 UTC m=+0.484448888 container attach 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:25:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:33 np0005534516 nova_compute[253538]: 2025-11-25 09:25:33.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:25:34 np0005534516 optimistic_buck[430817]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:25:34 np0005534516 optimistic_buck[430817]: --> relative data size: 1.0
Nov 25 04:25:34 np0005534516 optimistic_buck[430817]: --> All data devices are unavailable
Nov 25 04:25:34 np0005534516 systemd[1]: libpod-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Deactivated successfully.
Nov 25 04:25:34 np0005534516 systemd[1]: libpod-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Consumed 1.083s CPU time.
Nov 25 04:25:34 np0005534516 podman[430846]: 2025-11-25 09:25:34.325061522 +0000 UTC m=+0.029073005 container died 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 04:25:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935-merged.mount: Deactivated successfully.
Nov 25 04:25:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:35 np0005534516 nova_compute[253538]: 2025-11-25 09:25:35.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:25:35 np0005534516 podman[430846]: 2025-11-25 09:25:35.49986035 +0000 UTC m=+1.203871833 container remove 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:35 np0005534516 systemd[1]: libpod-conmon-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Deactivated successfully.
Nov 25 04:25:35 np0005534516 podman[430962]: 2025-11-25 09:25:35.977130441 +0000 UTC m=+0.090929580 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.304809854 +0000 UTC m=+0.032411964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.409051517 +0000 UTC m=+0.136653537 container create 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:25:36 np0005534516 systemd[1]: Started libpod-conmon-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope.
Nov 25 04:25:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.722507942 +0000 UTC m=+0.450110062 container init 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:25:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.736705749 +0000 UTC m=+0.464307809 container start 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 25 04:25:36 np0005534516 pensive_wright[431046]: 167 167
Nov 25 04:25:36 np0005534516 systemd[1]: libpod-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope: Deactivated successfully.
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.921410814 +0000 UTC m=+0.649012864 container attach 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:25:36 np0005534516 podman[431030]: 2025-11-25 09:25:36.923693916 +0000 UTC m=+0.651296006 container died 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:25:37 np0005534516 systemd[1]: var-lib-containers-storage-overlay-59f4b79e15372c5fae81d5641afd6d0089eeb88ef8f4234a37a18615ade3eb4c-merged.mount: Deactivated successfully.
Nov 25 04:25:37 np0005534516 podman[431030]: 2025-11-25 09:25:37.546578868 +0000 UTC m=+1.274180958 container remove 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:25:37 np0005534516 systemd[1]: libpod-conmon-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope: Deactivated successfully.
Nov 25 04:25:37 np0005534516 podman[431068]: 2025-11-25 09:25:37.697716968 +0000 UTC m=+0.021064895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:37 np0005534516 podman[431068]: 2025-11-25 09:25:37.867536797 +0000 UTC m=+0.190884744 container create 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 04:25:37 np0005534516 systemd[1]: Started libpod-conmon-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope.
Nov 25 04:25:38 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:38 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:38 np0005534516 podman[431068]: 2025-11-25 09:25:38.145129725 +0000 UTC m=+0.468477652 container init 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:25:38 np0005534516 podman[431068]: 2025-11-25 09:25:38.157221705 +0000 UTC m=+0.480569602 container start 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:38 np0005534516 podman[431068]: 2025-11-25 09:25:38.284448973 +0000 UTC m=+0.607796970 container attach 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:25:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]: {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    "0": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "devices": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "/dev/loop3"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            ],
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_name": "ceph_lv0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_size": "21470642176",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "name": "ceph_lv0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "tags": {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_name": "ceph",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.crush_device_class": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.encrypted": "0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_id": "0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.vdo": "0"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            },
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "vg_name": "ceph_vg0"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        }
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    ],
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    "1": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "devices": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "/dev/loop4"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            ],
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_name": "ceph_lv1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_size": "21470642176",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "name": "ceph_lv1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "tags": {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_name": "ceph",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.crush_device_class": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.encrypted": "0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_id": "1",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.vdo": "0"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            },
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "vg_name": "ceph_vg1"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        }
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    ],
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    "2": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "devices": [
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "/dev/loop5"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            ],
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_name": "ceph_lv2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_size": "21470642176",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "name": "ceph_lv2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "tags": {
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.cluster_name": "ceph",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.crush_device_class": "",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.encrypted": "0",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osd_id": "2",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:                "ceph.vdo": "0"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            },
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "type": "block",
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:            "vg_name": "ceph_vg2"
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:        }
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]:    ]
Nov 25 04:25:38 np0005534516 compassionate_saha[431084]: }
Nov 25 04:25:38 np0005534516 systemd[1]: libpod-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope: Deactivated successfully.
Nov 25 04:25:38 np0005534516 nova_compute[253538]: 2025-11-25 09:25:38.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:38 np0005534516 podman[431093]: 2025-11-25 09:25:38.99481059 +0000 UTC m=+0.041746720 container died 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:25:40 np0005534516 systemd[1]: var-lib-containers-storage-overlay-17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8-merged.mount: Deactivated successfully.
Nov 25 04:25:40 np0005534516 nova_compute[253538]: 2025-11-25 09:25:40.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:40 np0005534516 podman[431093]: 2025-11-25 09:25:40.549246796 +0000 UTC m=+1.596182856 container remove 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 04:25:40 np0005534516 systemd[1]: libpod-conmon-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope: Deactivated successfully.
Nov 25 04:25:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.113 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:25:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:25:41 np0005534516 podman[431248]: 2025-11-25 09:25:41.21918518 +0000 UTC m=+0.025099825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:41 np0005534516 podman[431248]: 2025-11-25 09:25:41.563641921 +0000 UTC m=+0.369556566 container create 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:25:41 np0005534516 systemd[1]: Started libpod-conmon-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope.
Nov 25 04:25:41 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:42 np0005534516 podman[431248]: 2025-11-25 09:25:42.162048655 +0000 UTC m=+0.967963330 container init 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:25:42 np0005534516 podman[431248]: 2025-11-25 09:25:42.170934907 +0000 UTC m=+0.976849592 container start 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 04:25:42 np0005534516 eager_keller[431266]: 167 167
Nov 25 04:25:42 np0005534516 systemd[1]: libpod-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope: Deactivated successfully.
Nov 25 04:25:42 np0005534516 podman[431248]: 2025-11-25 09:25:42.409173811 +0000 UTC m=+1.215088546 container attach 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:42 np0005534516 podman[431248]: 2025-11-25 09:25:42.410407046 +0000 UTC m=+1.216321701 container died 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 04:25:43 np0005534516 systemd[1]: var-lib-containers-storage-overlay-55a50467a0b2e310209f78869ec9626449b503c753b96f3dd2ce7404c785b6d0-merged.mount: Deactivated successfully.
Nov 25 04:25:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:43 np0005534516 podman[431248]: 2025-11-25 09:25:43.930846026 +0000 UTC m=+2.736760671 container remove 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:25:43 np0005534516 nova_compute[253538]: 2025-11-25 09:25:43.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:43 np0005534516 systemd[1]: libpod-conmon-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope: Deactivated successfully.
Nov 25 04:25:44 np0005534516 podman[431290]: 2025-11-25 09:25:44.107642046 +0000 UTC m=+0.036282160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:25:44 np0005534516 podman[431290]: 2025-11-25 09:25:44.513880181 +0000 UTC m=+0.442520245 container create 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:25:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 04:25:44 np0005534516 systemd[1]: Started libpod-conmon-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope.
Nov 25 04:25:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:25:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:44 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:25:45 np0005534516 podman[431290]: 2025-11-25 09:25:45.306208542 +0000 UTC m=+1.234848596 container init 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 04:25:45 np0005534516 podman[431290]: 2025-11-25 09:25:45.315255598 +0000 UTC m=+1.243895622 container start 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:25:45 np0005534516 nova_compute[253538]: 2025-11-25 09:25:45.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:45 np0005534516 podman[431290]: 2025-11-25 09:25:45.810216192 +0000 UTC m=+1.738856256 container attach 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:25:46 np0005534516 objective_leakey[431306]: {
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_id": 1,
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "type": "bluestore"
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    },
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_id": 2,
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "type": "bluestore"
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    },
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_id": 0,
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:        "type": "bluestore"
Nov 25 04:25:46 np0005534516 objective_leakey[431306]:    }
Nov 25 04:25:46 np0005534516 objective_leakey[431306]: }
Nov 25 04:25:46 np0005534516 systemd[1]: libpod-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope: Deactivated successfully.
Nov 25 04:25:46 np0005534516 conmon[431306]: conmon 9a1e3a9ef47abef2fa6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope/container/memory.events
Nov 25 04:25:46 np0005534516 podman[431339]: 2025-11-25 09:25:46.338762861 +0000 UTC m=+0.023683617 container died 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:25:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 8.4 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 684 KiB/s wr, 10 op/s
Nov 25 04:25:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 25 04:25:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c-merged.mount: Deactivated successfully.
Nov 25 04:25:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 25 04:25:47 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 25 04:25:48 np0005534516 podman[431339]: 2025-11-25 09:25:48.614236336 +0000 UTC m=+2.299157102 container remove 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 04:25:48 np0005534516 systemd[1]: libpod-conmon-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope: Deactivated successfully.
Nov 25 04:25:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 16 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Nov 25 04:25:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:25:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:48 np0005534516 nova_compute[253538]: 2025-11-25 09:25:48.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:25:49 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f612bd6d-323d-48c5-b27c-1f533d8bc4b5 does not exist
Nov 25 04:25:49 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a940df7b-7bec-47c8-922e-534951a059e3 does not exist
Nov 25 04:25:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:49 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:25:50 np0005534516 nova_compute[253538]: 2025-11-25 09:25:50.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Nov 25 04:25:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:25:53
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:25:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:25:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:53 np0005534516 nova_compute[253538]: 2025-11-25 09:25:53.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:25:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 29 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.8 MiB/s wr, 18 op/s
Nov 25 04:25:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 25 04:25:55 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 25 04:25:55 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 25 04:25:55 np0005534516 nova_compute[253538]: 2025-11-25 09:25:55.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:25:55 np0005534516 podman[431406]: 2025-11-25 09:25:55.856175525 +0000 UTC m=+0.092008409 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:25:55 np0005534516 podman[431405]: 2025-11-25 09:25:55.865624013 +0000 UTC m=+0.101401486 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:25:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 33 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Nov 25 04:25:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Nov 25 04:25:58 np0005534516 nova_compute[253538]: 2025-11-25 09:25:58.884 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:25:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:25:58 np0005534516 nova_compute[253538]: 2025-11-25 09:25:58.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:00 np0005534516 nova_compute[253538]: 2025-11-25 09:26:00.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Nov 25 04:26:02 np0005534516 nova_compute[253538]: 2025-11-25 09:26:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Nov 25 04:26:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:03 np0005534516 nova_compute[253538]: 2025-11-25 09:26:03.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:26:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Nov 25 04:26:05 np0005534516 nova_compute[253538]: 2025-11-25 09:26:05.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:06 np0005534516 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:06 np0005534516 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:26:06 np0005534516 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:26:06 np0005534516 nova_compute[253538]: 2025-11-25 09:26:06.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:26:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 MiB/s wr, 5 op/s
Nov 25 04:26:06 np0005534516 podman[431446]: 2025-11-25 09:26:06.854973337 +0000 UTC m=+0.107392999 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:26:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 683 KiB/s wr, 4 op/s
Nov 25 04:26:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:08 np0005534516 nova_compute[253538]: 2025-11-25 09:26:08.997 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:09 np0005534516 nova_compute[253538]: 2025-11-25 09:26:09.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:10 np0005534516 nova_compute[253538]: 2025-11-25 09:26:10.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 04:26:11 np0005534516 nova_compute[253538]: 2025-11-25 09:26:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.903540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773903590, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 255, "total_data_size": 800982, "memory_usage": 815096, "flush_reason": "Manual Compaction"}
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773912284, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 793988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65818, "largest_seqno": 66490, "table_properties": {"data_size": 790346, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8079, "raw_average_key_size": 18, "raw_value_size": 782991, "raw_average_value_size": 1838, "num_data_blocks": 66, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062724, "oldest_key_time": 1764062724, "file_creation_time": 1764062773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8808 microseconds, and 2991 cpu microseconds.
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.912350) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 793988 bytes OK
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.912372) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915673) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915700) EVENT_LOG_v1 {"time_micros": 1764062773915692, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 797408, prev total WAL file size 797408, number of live WAL files 2.
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.916530) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373639' seq:72057594037927935, type:22 .. '6C6F676D0033303230' seq:0, type:0; will stop at (end)
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(775KB)], [155(9672KB)]
Nov 25 04:26:13 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773916573, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10698833, "oldest_snapshot_seqno": -1}
Nov 25 04:26:14 np0005534516 nova_compute[253538]: 2025-11-25 09:26:13.999 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8361 keys, 10584525 bytes, temperature: kUnknown
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774010621, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10584525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10530737, "index_size": 31787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 220315, "raw_average_key_size": 26, "raw_value_size": 10383584, "raw_average_value_size": 1241, "num_data_blocks": 1235, "num_entries": 8361, "num_filter_entries": 8361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.010901) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10584525 bytes
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.012835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.7 rd, 112.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(26.8) write-amplify(13.3) OK, records in: 8887, records dropped: 526 output_compression: NoCompression
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.012857) EVENT_LOG_v1 {"time_micros": 1764062774012848, "job": 96, "event": "compaction_finished", "compaction_time_micros": 94135, "compaction_time_cpu_micros": 49087, "output_level": 6, "num_output_files": 1, "total_output_size": 10584525, "num_input_records": 8887, "num_output_records": 8361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774013148, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774015809, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.916420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:26:14 np0005534516 nova_compute[253538]: 2025-11-25 09:26:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:14 np0005534516 nova_compute[253538]: 2025-11-25 09:26:14.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:26:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:15 np0005534516 nova_compute[253538]: 2025-11-25 09:26:15.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:15 np0005534516 nova_compute[253538]: 2025-11-25 09:26:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:16 np0005534516 nova_compute[253538]: 2025-11-25 09:26:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:19 np0005534516 nova_compute[253538]: 2025-11-25 09:26:19.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:20 np0005534516 nova_compute[253538]: 2025-11-25 09:26:20.427 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:24.228 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:24 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:24.229 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:26:24 np0005534516 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:26:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:26:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268513709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.100 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.264 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.266 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3640MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.266 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.267 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.427 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.471 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.472 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:26:25 np0005534516 nova_compute[253538]: 2025-11-25 09:26:25.489 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:26:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:26:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748818001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.006 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.012 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.026 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.028 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.029 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:26 np0005534516 nova_compute[253538]: 2025-11-25 09:26:26.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:26:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:26 np0005534516 podman[431516]: 2025-11-25 09:26:26.824346882 +0000 UTC m=+0.067649385 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:26:26 np0005534516 podman[431517]: 2025-11-25 09:26:26.824892297 +0000 UTC m=+0.063668406 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:26:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:28 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:29 np0005534516 nova_compute[253538]: 2025-11-25 09:26:29.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:26:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:26:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:26:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:26:30 np0005534516 nova_compute[253538]: 2025-11-25 09:26:30.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:32 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:32.232 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:26:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:34 np0005534516 nova_compute[253538]: 2025-11-25 09:26:34.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 25 04:26:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 25 04:26:34 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 25 04:26:35 np0005534516 nova_compute[253538]: 2025-11-25 09:26:35.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Nov 25 04:26:37 np0005534516 podman[431553]: 2025-11-25 09:26:37.858450435 +0000 UTC m=+0.103177134 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:26:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 04:26:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:39 np0005534516 nova_compute[253538]: 2025-11-25 09:26:39.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:40 np0005534516 nova_compute[253538]: 2025-11-25 09:26:40.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 04:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:26:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:26:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 04:26:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 25 04:26:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 25 04:26:44 np0005534516 nova_compute[253538]: 2025-11-25 09:26:44.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:44 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 25 04:26:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 04:26:45 np0005534516 nova_compute[253538]: 2025-11-25 09:26:45.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 04:26:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 1 op/s
Nov 25 04:26:48 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:49 np0005534516 nova_compute[253538]: 2025-11-25 09:26:49.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:50 np0005534516 podman[431751]: 2025-11-25 09:26:50.368582118 +0000 UTC m=+0.171429905 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:26:50 np0005534516 podman[431751]: 2025-11-25 09:26:50.485713751 +0000 UTC m=+0.288561508 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:26:50 np0005534516 nova_compute[253538]: 2025-11-25 09:26:50.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:51 np0005534516 nova_compute[253538]: 2025-11-25 09:26:51.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:51 np0005534516 nova_compute[253538]: 2025-11-25 09:26:51.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:26:51 np0005534516 nova_compute[253538]: 2025-11-25 09:26:51.587 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:51 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9df99beb-afbf-474a-a41b-9d1615cf1235 does not exist
Nov 25 04:26:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 012e2f98-bff3-482e-81ad-05ff027402c1 does not exist
Nov 25 04:26:52 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d60505dd-3c34-49e3-a50a-622841ecfe31 does not exist
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.707332247 +0000 UTC m=+0.051126915 container create 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:26:52 np0005534516 systemd[1]: Started libpod-conmon-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope.
Nov 25 04:26:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:52 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.683949919 +0000 UTC m=+0.027744677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.792160239 +0000 UTC m=+0.135954957 container init 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.804505065 +0000 UTC m=+0.148299733 container start 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.807859297 +0000 UTC m=+0.151654025 container attach 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:26:52 np0005534516 affectionate_rhodes[432195]: 167 167
Nov 25 04:26:52 np0005534516 systemd[1]: libpod-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope: Deactivated successfully.
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.809346268 +0000 UTC m=+0.153140936 container died 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:26:52 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6df0af510512eae51917558090c3c4f3b0ffd4a4fcac6264539b4066d2dd1d7b-merged.mount: Deactivated successfully.
Nov 25 04:26:52 np0005534516 podman[432178]: 2025-11-25 09:26:52.845230246 +0000 UTC m=+0.189024914 container remove 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:26:52 np0005534516 systemd[1]: libpod-conmon-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope: Deactivated successfully.
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:52 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:26:53 np0005534516 podman[432219]: 2025-11-25 09:26:53.068605045 +0000 UTC m=+0.060657224 container create 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:26:53 np0005534516 systemd[1]: Started libpod-conmon-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope.
Nov 25 04:26:53 np0005534516 podman[432219]: 2025-11-25 09:26:53.047014597 +0000 UTC m=+0.039066816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:53 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:53 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:53 np0005534516 podman[432219]: 2025-11-25 09:26:53.185570054 +0000 UTC m=+0.177622243 container init 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 04:26:53 np0005534516 podman[432219]: 2025-11-25 09:26:53.191971319 +0000 UTC m=+0.184023488 container start 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:26:53 np0005534516 podman[432219]: 2025-11-25 09:26:53.203436792 +0000 UTC m=+0.195488961 container attach 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:26:53
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:26:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:26:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:54 np0005534516 nova_compute[253538]: 2025-11-25 09:26:54.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:26:54 np0005534516 zen_visvesvaraya[432236]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:26:54 np0005534516 zen_visvesvaraya[432236]: --> relative data size: 1.0
Nov 25 04:26:54 np0005534516 zen_visvesvaraya[432236]: --> All data devices are unavailable
Nov 25 04:26:54 np0005534516 systemd[1]: libpod-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Deactivated successfully.
Nov 25 04:26:54 np0005534516 systemd[1]: libpod-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Consumed 1.126s CPU time.
Nov 25 04:26:54 np0005534516 podman[432219]: 2025-11-25 09:26:54.368787762 +0000 UTC m=+1.360839931 container died 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:26:54 np0005534516 systemd[1]: var-lib-containers-storage-overlay-65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11-merged.mount: Deactivated successfully.
Nov 25 04:26:54 np0005534516 podman[432219]: 2025-11-25 09:26:54.582492597 +0000 UTC m=+1.574544776 container remove 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:26:54 np0005534516 systemd[1]: libpod-conmon-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Deactivated successfully.
Nov 25 04:26:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.268286664 +0000 UTC m=+0.054050005 container create 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:26:55 np0005534516 systemd[1]: Started libpod-conmon-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope.
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.241704359 +0000 UTC m=+0.027467790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.363857929 +0000 UTC m=+0.149621300 container init 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.372227027 +0000 UTC m=+0.157990368 container start 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:26:55 np0005534516 happy_elgamal[432436]: 167 167
Nov 25 04:26:55 np0005534516 systemd[1]: libpod-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope: Deactivated successfully.
Nov 25 04:26:55 np0005534516 conmon[432436]: conmon 721f777e83b330561fd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope/container/memory.events
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.377174272 +0000 UTC m=+0.162937613 container attach 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.38074721 +0000 UTC m=+0.166510591 container died 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 04:26:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-bd63e34c9302aa5a9620f472ddde59042b4881de581165b483750570bfc19568-merged.mount: Deactivated successfully.
Nov 25 04:26:55 np0005534516 podman[432420]: 2025-11-25 09:26:55.424415089 +0000 UTC m=+0.210178430 container remove 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:26:55 np0005534516 systemd[1]: libpod-conmon-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope: Deactivated successfully.
Nov 25 04:26:55 np0005534516 nova_compute[253538]: 2025-11-25 09:26:55.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:55 np0005534516 podman[432460]: 2025-11-25 09:26:55.603038799 +0000 UTC m=+0.044801082 container create 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 04:26:55 np0005534516 systemd[1]: Started libpod-conmon-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope.
Nov 25 04:26:55 np0005534516 podman[432460]: 2025-11-25 09:26:55.585183203 +0000 UTC m=+0.026945506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:55 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:55 np0005534516 podman[432460]: 2025-11-25 09:26:55.716972476 +0000 UTC m=+0.158734779 container init 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:26:55 np0005534516 podman[432460]: 2025-11-25 09:26:55.725923289 +0000 UTC m=+0.167685582 container start 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:26:55 np0005534516 podman[432460]: 2025-11-25 09:26:55.730795683 +0000 UTC m=+0.172557996 container attach 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:26:56 np0005534516 charming_snyder[432475]: {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    "0": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "devices": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "/dev/loop3"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            ],
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_name": "ceph_lv0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_size": "21470642176",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "name": "ceph_lv0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "tags": {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_name": "ceph",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.crush_device_class": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.encrypted": "0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_id": "0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.vdo": "0"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            },
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "vg_name": "ceph_vg0"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        }
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    ],
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    "1": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "devices": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "/dev/loop4"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            ],
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_name": "ceph_lv1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_size": "21470642176",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "name": "ceph_lv1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "tags": {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_name": "ceph",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.crush_device_class": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.encrypted": "0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_id": "1",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.vdo": "0"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            },
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "vg_name": "ceph_vg1"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        }
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    ],
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    "2": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "devices": [
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "/dev/loop5"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            ],
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_name": "ceph_lv2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_size": "21470642176",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "name": "ceph_lv2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "tags": {
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.cluster_name": "ceph",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.crush_device_class": "",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.encrypted": "0",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osd_id": "2",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:                "ceph.vdo": "0"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            },
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "type": "block",
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:            "vg_name": "ceph_vg2"
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:        }
Nov 25 04:26:56 np0005534516 charming_snyder[432475]:    ]
Nov 25 04:26:56 np0005534516 charming_snyder[432475]: }
Nov 25 04:26:56 np0005534516 systemd[1]: libpod-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope: Deactivated successfully.
Nov 25 04:26:56 np0005534516 podman[432460]: 2025-11-25 09:26:56.55747568 +0000 UTC m=+0.999237973 container died 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:26:56 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8-merged.mount: Deactivated successfully.
Nov 25 04:26:56 np0005534516 podman[432460]: 2025-11-25 09:26:56.6825475 +0000 UTC m=+1.124309793 container remove 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:26:56 np0005534516 systemd[1]: libpod-conmon-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope: Deactivated successfully.
Nov 25 04:26:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:56 np0005534516 podman[432551]: 2025-11-25 09:26:56.975326691 +0000 UTC m=+0.079146189 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 04:26:56 np0005534516 podman[432550]: 2025-11-25 09:26:56.980924644 +0000 UTC m=+0.086120720 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:26:57 np0005534516 podman[432676]: 2025-11-25 09:26:57.406684241 +0000 UTC m=+0.035244403 container create d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 04:26:57 np0005534516 systemd[1]: Started libpod-conmon-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope.
Nov 25 04:26:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:57 np0005534516 podman[432676]: 2025-11-25 09:26:57.481824059 +0000 UTC m=+0.110384271 container init d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:26:57 np0005534516 podman[432676]: 2025-11-25 09:26:57.392795322 +0000 UTC m=+0.021355504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:57 np0005534516 podman[432676]: 2025-11-25 09:26:57.489000524 +0000 UTC m=+0.117560686 container start d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:26:57 np0005534516 podman[432676]: 2025-11-25 09:26:57.491979775 +0000 UTC m=+0.120539957 container attach d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:26:57 np0005534516 nervous_chandrasekhar[432692]: 167 167
Nov 25 04:26:57 np0005534516 systemd[1]: libpod-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope: Deactivated successfully.
Nov 25 04:26:57 np0005534516 conmon[432692]: conmon d5ccffd3cd1cc3717dc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope/container/memory.events
Nov 25 04:26:57 np0005534516 podman[432697]: 2025-11-25 09:26:57.540457428 +0000 UTC m=+0.029250379 container died d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 04:26:57 np0005534516 systemd[1]: var-lib-containers-storage-overlay-8901162eafe67e28a264ee5cbc98e3352a7c971b888ad9a0e797b0a839bd7481-merged.mount: Deactivated successfully.
Nov 25 04:26:57 np0005534516 podman[432697]: 2025-11-25 09:26:57.57427195 +0000 UTC m=+0.063064891 container remove d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 04:26:57 np0005534516 systemd[1]: libpod-conmon-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope: Deactivated successfully.
Nov 25 04:26:57 np0005534516 podman[432719]: 2025-11-25 09:26:57.773202593 +0000 UTC m=+0.064698205 container create 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:26:57 np0005534516 systemd[1]: Started libpod-conmon-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope.
Nov 25 04:26:57 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:26:57 np0005534516 podman[432719]: 2025-11-25 09:26:57.742417444 +0000 UTC m=+0.033913126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:26:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:57 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:26:57 np0005534516 podman[432719]: 2025-11-25 09:26:57.848968488 +0000 UTC m=+0.140464110 container init 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:26:57 np0005534516 podman[432719]: 2025-11-25 09:26:57.863208226 +0000 UTC m=+0.154703818 container start 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 04:26:57 np0005534516 podman[432719]: 2025-11-25 09:26:57.86701297 +0000 UTC m=+0.158508592 container attach 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:26:58 np0005534516 nova_compute[253538]: 2025-11-25 09:26:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]: {
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_id": 1,
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "type": "bluestore"
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    },
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_id": 2,
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "type": "bluestore"
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    },
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_id": 0,
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:        "type": "bluestore"
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]:    }
Nov 25 04:26:58 np0005534516 gallant_heyrovsky[432735]: }
Nov 25 04:26:58 np0005534516 systemd[1]: libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Deactivated successfully.
Nov 25 04:26:58 np0005534516 systemd[1]: libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Consumed 1.067s CPU time.
Nov 25 04:26:58 np0005534516 conmon[432735]: conmon 0249c9de94e5e19b9345 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope/container/memory.events
Nov 25 04:26:58 np0005534516 podman[432719]: 2025-11-25 09:26:58.929459695 +0000 UTC m=+1.220955307 container died 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:26:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a-merged.mount: Deactivated successfully.
Nov 25 04:26:58 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:26:58 np0005534516 podman[432719]: 2025-11-25 09:26:58.986700345 +0000 UTC m=+1.278195937 container remove 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:26:58 np0005534516 systemd[1]: libpod-conmon-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Deactivated successfully.
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:59 np0005534516 nova_compute[253538]: 2025-11-25 09:26:59.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:26:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 47537c22-396f-442c-abf7-963364df7155 does not exist
Nov 25 04:26:59 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 7be838f9-da08-4dbd-9c7f-529953f76d96 does not exist
Nov 25 04:26:59 np0005534516 nova_compute[253538]: 2025-11-25 09:26:59.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:26:59 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:27:00 np0005534516 nova_compute[253538]: 2025-11-25 09:27:00.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:03 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:04 np0005534516 nova_compute[253538]: 2025-11-25 09:27:04.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:04 np0005534516 nova_compute[253538]: 2025-11-25 09:27:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:27:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:05 np0005534516 nova_compute[253538]: 2025-11-25 09:27:05.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:06 np0005534516 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:06 np0005534516 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:27:06 np0005534516 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:27:06 np0005534516 nova_compute[253538]: 2025-11-25 09:27:06.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:27:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:08 np0005534516 podman[432830]: 2025-11-25 09:27:08.853062062 +0000 UTC m=+0.095255058 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 04:27:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:09 np0005534516 nova_compute[253538]: 2025-11-25 09:27:09.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:10 np0005534516 nova_compute[253538]: 2025-11-25 09:27:10.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:10 np0005534516 nova_compute[253538]: 2025-11-25 09:27:10.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:13 np0005534516 nova_compute[253538]: 2025-11-25 09:27:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:14 np0005534516 nova_compute[253538]: 2025-11-25 09:27:14.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:14 np0005534516 nova_compute[253538]: 2025-11-25 09:27:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:14 np0005534516 nova_compute[253538]: 2025-11-25 09:27:14.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:27:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:15 np0005534516 nova_compute[253538]: 2025-11-25 09:27:15.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:16 np0005534516 nova_compute[253538]: 2025-11-25 09:27:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:17 np0005534516 nova_compute[253538]: 2025-11-25 09:27:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:19 np0005534516 nova_compute[253538]: 2025-11-25 09:27:19.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:20 np0005534516 nova_compute[253538]: 2025-11-25 09:27:20.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:24 np0005534516 nova_compute[253538]: 2025-11-25 09:27:24.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:24 np0005534516 nova_compute[253538]: 2025-11-25 09:27:24.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:25 np0005534516 nova_compute[253538]: 2025-11-25 09:27:25.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:27:26 np0005534516 nova_compute[253538]: 2025-11-25 09:27:26.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:27:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:27:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317948295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.081 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.253 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.254 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.335 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.357 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:27:27 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:27:27 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955999655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:27:27 np0005534516 podman[432898]: 2025-11-25 09:27:27.821953514 +0000 UTC m=+0.071711045 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.832 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.839 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.856 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.859 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:27:27 np0005534516 nova_compute[253538]: 2025-11-25 09:27:27.859 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:27:27 np0005534516 podman[432899]: 2025-11-25 09:27:27.863104406 +0000 UTC m=+0.098611849 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:27:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:27:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:27:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:27:29 np0005534516 nova_compute[253538]: 2025-11-25 09:27:29.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:30 np0005534516 nova_compute[253538]: 2025-11-25 09:27:30.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:34 np0005534516 nova_compute[253538]: 2025-11-25 09:27:34.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:35 np0005534516 nova_compute[253538]: 2025-11-25 09:27:35.640 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:39 np0005534516 nova_compute[253538]: 2025-11-25 09:27:39.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:39 np0005534516 podman[432942]: 2025-11-25 09:27:39.832677044 +0000 UTC m=+0.089870891 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:27:40 np0005534516 nova_compute[253538]: 2025-11-25 09:27:40.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.116 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:27:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.116 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:27:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:44 np0005534516 nova_compute[253538]: 2025-11-25 09:27:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:45 np0005534516 nova_compute[253538]: 2025-11-25 09:27:45.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:49 np0005534516 nova_compute[253538]: 2025-11-25 09:27:49.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:50 np0005534516 nova_compute[253538]: 2025-11-25 09:27:50.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:27:53
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:27:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:27:54 np0005534516 nova_compute[253538]: 2025-11-25 09:27:54.109 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:27:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:55 np0005534516 nova_compute[253538]: 2025-11-25 09:27:55.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:27:58 np0005534516 podman[432973]: 2025-11-25 09:27:58.803241636 +0000 UTC m=+0.052545424 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:27:58 np0005534516 podman[432972]: 2025-11-25 09:27:58.809238059 +0000 UTC m=+0.058721262 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:27:59 np0005534516 nova_compute[253538]: 2025-11-25 09:27:59.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:27:59 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8be94307-49c2-422e-a43b-8869bb3c969d does not exist
Nov 25 04:28:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f4692f7-649b-4757-b7d9-cdbaa6a90999 does not exist
Nov 25 04:28:00 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 578e15c1-81d8-4610-906e-e421483f9158 does not exist
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:28:00 np0005534516 nova_compute[253538]: 2025-11-25 09:28:00.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.733257251 +0000 UTC m=+0.047750713 container create 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:28:00 np0005534516 systemd[1]: Started libpod-conmon-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope.
Nov 25 04:28:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.712175307 +0000 UTC m=+0.026668819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.827514121 +0000 UTC m=+0.142007603 container init 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.83958603 +0000 UTC m=+0.154079532 container start 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.843095136 +0000 UTC m=+0.157588618 container attach 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:28:00 np0005534516 systemd[1]: libpod-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope: Deactivated successfully.
Nov 25 04:28:00 np0005534516 distracted_haibt[433296]: 167 167
Nov 25 04:28:00 np0005534516 conmon[433296]: conmon 28d4ad0a77cd9d12005f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope/container/memory.events
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.848229816 +0000 UTC m=+0.162723278 container died 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:00 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:28:00 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0d81d09c12d09c65e69e6e6eb50ca726ce25d21cebb71838d85929ce940014ba-merged.mount: Deactivated successfully.
Nov 25 04:28:00 np0005534516 podman[433279]: 2025-11-25 09:28:00.897731765 +0000 UTC m=+0.212225267 container remove 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:28:00 np0005534516 systemd[1]: libpod-conmon-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope: Deactivated successfully.
Nov 25 04:28:01 np0005534516 podman[433319]: 2025-11-25 09:28:01.109296543 +0000 UTC m=+0.056158802 container create 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:28:01 np0005534516 systemd[1]: Started libpod-conmon-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope.
Nov 25 04:28:01 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:01 np0005534516 podman[433319]: 2025-11-25 09:28:01.091541469 +0000 UTC m=+0.038403748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:01 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:01 np0005534516 podman[433319]: 2025-11-25 09:28:01.210913473 +0000 UTC m=+0.157775732 container init 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:28:01 np0005534516 podman[433319]: 2025-11-25 09:28:01.221691797 +0000 UTC m=+0.168554096 container start 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:28:01 np0005534516 podman[433319]: 2025-11-25 09:28:01.225887741 +0000 UTC m=+0.172750020 container attach 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:28:02 np0005534516 nifty_maxwell[433335]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:28:02 np0005534516 nifty_maxwell[433335]: --> relative data size: 1.0
Nov 25 04:28:02 np0005534516 nifty_maxwell[433335]: --> All data devices are unavailable
Nov 25 04:28:02 np0005534516 systemd[1]: libpod-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Deactivated successfully.
Nov 25 04:28:02 np0005534516 systemd[1]: libpod-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Consumed 1.109s CPU time.
Nov 25 04:28:02 np0005534516 podman[433319]: 2025-11-25 09:28:02.366508298 +0000 UTC m=+1.313370557 container died 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 04:28:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a-merged.mount: Deactivated successfully.
Nov 25 04:28:02 np0005534516 podman[433319]: 2025-11-25 09:28:02.428398785 +0000 UTC m=+1.375261034 container remove 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 04:28:02 np0005534516 systemd[1]: libpod-conmon-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Deactivated successfully.
Nov 25 04:28:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:02 np0005534516 nova_compute[253538]: 2025-11-25 09:28:02.861 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.022121911 +0000 UTC m=+0.040041833 container create c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:28:03 np0005534516 systemd[1]: Started libpod-conmon-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope.
Nov 25 04:28:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.096602881 +0000 UTC m=+0.114522823 container init c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.005866308 +0000 UTC m=+0.023786250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.104403044 +0000 UTC m=+0.122322976 container start c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.10793735 +0000 UTC m=+0.125857272 container attach c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:28:03 np0005534516 festive_maxwell[433531]: 167 167
Nov 25 04:28:03 np0005534516 systemd[1]: libpod-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope: Deactivated successfully.
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.109685818 +0000 UTC m=+0.127605740 container died c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:28:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6e52d1cb34c26c792017842888b06f09dee67e1e8d997821516d5cf2c803f10c-merged.mount: Deactivated successfully.
Nov 25 04:28:03 np0005534516 podman[433515]: 2025-11-25 09:28:03.142264226 +0000 UTC m=+0.160184148 container remove c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:28:03 np0005534516 systemd[1]: libpod-conmon-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope: Deactivated successfully.
Nov 25 04:28:03 np0005534516 podman[433553]: 2025-11-25 09:28:03.31956981 +0000 UTC m=+0.054525558 container create 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:28:03 np0005534516 systemd[1]: Started libpod-conmon-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope.
Nov 25 04:28:03 np0005534516 podman[433553]: 2025-11-25 09:28:03.295581966 +0000 UTC m=+0.030537804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:03 np0005534516 podman[433553]: 2025-11-25 09:28:03.420293046 +0000 UTC m=+0.155248864 container init 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 04:28:03 np0005534516 podman[433553]: 2025-11-25 09:28:03.429168478 +0000 UTC m=+0.164124226 container start 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 04:28:03 np0005534516 podman[433553]: 2025-11-25 09:28:03.434147714 +0000 UTC m=+0.169103512 container attach 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 04:28:04 np0005534516 nova_compute[253538]: 2025-11-25 09:28:04.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]: {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    "0": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "devices": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "/dev/loop3"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            ],
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_name": "ceph_lv0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_size": "21470642176",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "name": "ceph_lv0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "tags": {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_name": "ceph",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.crush_device_class": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.encrypted": "0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_id": "0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.vdo": "0"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            },
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "vg_name": "ceph_vg0"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        }
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    ],
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    "1": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "devices": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "/dev/loop4"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            ],
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_name": "ceph_lv1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_size": "21470642176",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "name": "ceph_lv1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "tags": {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_name": "ceph",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.crush_device_class": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.encrypted": "0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_id": "1",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.vdo": "0"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            },
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "vg_name": "ceph_vg1"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        }
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    ],
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    "2": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "devices": [
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "/dev/loop5"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            ],
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_name": "ceph_lv2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_size": "21470642176",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "name": "ceph_lv2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "tags": {
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.cluster_name": "ceph",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.crush_device_class": "",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.encrypted": "0",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osd_id": "2",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:                "ceph.vdo": "0"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            },
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "type": "block",
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:            "vg_name": "ceph_vg2"
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:        }
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]:    ]
Nov 25 04:28:04 np0005534516 lucid_davinci[433570]: }
Nov 25 04:28:04 np0005534516 systemd[1]: libpod-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope: Deactivated successfully.
Nov 25 04:28:04 np0005534516 podman[433553]: 2025-11-25 09:28:04.236704543 +0000 UTC m=+0.971660321 container died 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:28:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9-merged.mount: Deactivated successfully.
Nov 25 04:28:04 np0005534516 podman[433553]: 2025-11-25 09:28:04.305007775 +0000 UTC m=+1.039963523 container remove 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:28:04 np0005534516 systemd[1]: libpod-conmon-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope: Deactivated successfully.
Nov 25 04:28:04 np0005534516 nova_compute[253538]: 2025-11-25 09:28:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:28:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.038265795 +0000 UTC m=+0.093153510 container create 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:04.968882133 +0000 UTC m=+0.023769878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:05 np0005534516 systemd[1]: Started libpod-conmon-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope.
Nov 25 04:28:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.222392985 +0000 UTC m=+0.277280720 container init 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.230925018 +0000 UTC m=+0.285812743 container start 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:28:05 np0005534516 eager_feynman[433744]: 167 167
Nov 25 04:28:05 np0005534516 systemd[1]: libpod-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope: Deactivated successfully.
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.250291306 +0000 UTC m=+0.305179041 container attach 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.250765208 +0000 UTC m=+0.305652923 container died 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 04:28:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c97721eca3636abfbe8ed2d52a1478e021fa93f2c205c0bf5b217d0d148e8a38-merged.mount: Deactivated successfully.
Nov 25 04:28:05 np0005534516 podman[433728]: 2025-11-25 09:28:05.359963766 +0000 UTC m=+0.414851491 container remove 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 04:28:05 np0005534516 systemd[1]: libpod-conmon-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope: Deactivated successfully.
Nov 25 04:28:05 np0005534516 podman[433770]: 2025-11-25 09:28:05.538177594 +0000 UTC m=+0.044664578 container create b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:28:05 np0005534516 systemd[1]: Started libpod-conmon-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope.
Nov 25 04:28:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:28:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:05 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:28:05 np0005534516 podman[433770]: 2025-11-25 09:28:05.515690291 +0000 UTC m=+0.022177305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:28:05 np0005534516 podman[433770]: 2025-11-25 09:28:05.62569582 +0000 UTC m=+0.132182824 container init b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:28:05 np0005534516 podman[433770]: 2025-11-25 09:28:05.635514657 +0000 UTC m=+0.142001651 container start b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:28:05 np0005534516 podman[433770]: 2025-11-25 09:28:05.647269019 +0000 UTC m=+0.153756033 container attach b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 04:28:05 np0005534516 nova_compute[253538]: 2025-11-25 09:28:05.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:28:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:28:05 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1378 writes, 6241 keys, 1378 commit groups, 1.0 writes per commit group, ingest: 8.93 MB, 0.01 MB/s
Interval WAL: 1378 writes, 1378 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     33.7      2.40              0.28        48    0.050       0      0       0.0       0.0
  L6      1/0   10.09 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.8     66.7     56.4      6.81              1.24        47    0.145    309K    25K       0.0       0.0
 Sum      1/0   10.09 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.8     49.3     50.4      9.21              1.52        95    0.097    309K    25K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.6     99.4    100.5      0.53              0.18        10    0.053     43K   2516       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     66.7     56.4      6.81              1.24        47    0.145    309K    25K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     33.7      2.40              0.28        47    0.051       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.079, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.45 GB write, 0.08 MB/s write, 0.44 GB read, 0.08 MB/s read, 9.2 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 52.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000367 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3403,50.30 MB,16.5474%) FilterBlock(96,862.42 KB,0.277042%) IndexBlock(96,1.33 MB,0.437576%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]: {
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_id": 1,
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "type": "bluestore"
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    },
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_id": 2,
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "type": "bluestore"
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    },
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_id": 0,
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:        "type": "bluestore"
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]:    }
Nov 25 04:28:06 np0005534516 suspicious_lamport[433787]: }
Nov 25 04:28:06 np0005534516 systemd[1]: libpod-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope: Deactivated successfully.
Nov 25 04:28:06 np0005534516 conmon[433787]: conmon b1e7bd2f14efa6eae015 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope/container/memory.events
Nov 25 04:28:06 np0005534516 podman[433770]: 2025-11-25 09:28:06.623543614 +0000 UTC m=+1.130030598 container died b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:28:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2-merged.mount: Deactivated successfully.
Nov 25 04:28:06 np0005534516 podman[433770]: 2025-11-25 09:28:06.718312557 +0000 UTC m=+1.224799541 container remove b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:28:06 np0005534516 systemd[1]: libpod-conmon-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope: Deactivated successfully.
Nov 25 04:28:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:28:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:28:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 8f4de430-621b-4b56-9796-f674f387d1f8 does not exist
Nov 25 04:28:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev b7a1ea45-d7a0-41a1-bf40-1b71e50be779 does not exist
Nov 25 04:28:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:07 np0005534516 nova_compute[253538]: 2025-11-25 09:28:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 04:28:07 np0005534516 nova_compute[253538]: 2025-11-25 09:28:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 04:28:07 np0005534516 nova_compute[253538]: 2025-11-25 09:28:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 04:28:07 np0005534516 nova_compute[253538]: 2025-11-25 09:28:07.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 04:28:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:28:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:09 np0005534516 nova_compute[253538]: 2025-11-25 09:28:09.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:10 np0005534516 nova_compute[253538]: 2025-11-25 09:28:10.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:10 np0005534516 podman[433884]: 2025-11-25 09:28:10.857926742 +0000 UTC m=+0.113343241 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:28:11 np0005534516 nova_compute[253538]: 2025-11-25 09:28:11.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:13 np0005534516 nova_compute[253538]: 2025-11-25 09:28:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:14 np0005534516 nova_compute[253538]: 2025-11-25 09:28:14.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:15 np0005534516 nova_compute[253538]: 2025-11-25 09:28:15.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:15 np0005534516 nova_compute[253538]: 2025-11-25 09:28:15.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:28:15 np0005534516 nova_compute[253538]: 2025-11-25 09:28:15.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:16 np0005534516 nova_compute[253538]: 2025-11-25 09:28:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:19 np0005534516 nova_compute[253538]: 2025-11-25 09:28:19.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:19 np0005534516 nova_compute[253538]: 2025-11-25 09:28:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:20 np0005534516 nova_compute[253538]: 2025-11-25 09:28:20.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:24 np0005534516 nova_compute[253538]: 2025-11-25 09:28:24.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:25 np0005534516 nova_compute[253538]: 2025-11-25 09:28:25.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:28:28 np0005534516 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:28:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14658349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.054 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.219 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:28:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.558 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.559 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.674 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.758 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.759 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.777 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.795 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:28:29 np0005534516 podman[433934]: 2025-11-25 09:28:29.816144922 +0000 UTC m=+0.065775844 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:28:29 np0005534516 nova_compute[253538]: 2025-11-25 09:28:29.816 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:28:29 np0005534516 podman[433933]: 2025-11-25 09:28:29.816676397 +0000 UTC m=+0.068977721 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 04:28:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:28:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446382416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.268 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.275 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.294 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.297 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:28:30 np0005534516 nova_compute[253538]: 2025-11-25 09:28:30.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:34 np0005534516 nova_compute[253538]: 2025-11-25 09:28:34.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:35 np0005534516 nova_compute[253538]: 2025-11-25 09:28:35.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:39 np0005534516 nova_compute[253538]: 2025-11-25 09:28:39.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:40 np0005534516 nova_compute[253538]: 2025-11-25 09:28:40.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.117 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.117 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:28:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:28:41 np0005534516 podman[433988]: 2025-11-25 09:28:41.829054639 +0000 UTC m=+0.085357998 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:28:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:44 np0005534516 nova_compute[253538]: 2025-11-25 09:28:44.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:45 np0005534516 nova_compute[253538]: 2025-11-25 09:28:45.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:45 np0005534516 systemd-logind[822]: New session 52 of user zuul.
Nov 25 04:28:45 np0005534516 systemd[1]: Started Session 52 of User zuul.
Nov 25 04:28:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:48.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:28:48 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:48.126 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:28:48 np0005534516 nova_compute[253538]: 2025-11-25 09:28:48.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:49 np0005534516 nova_compute[253538]: 2025-11-25 09:28:49.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:50 np0005534516 nova_compute[253538]: 2025-11-25 09:28:50.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:51 np0005534516 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 04:28:51 np0005534516 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 25 04:28:51 np0005534516 systemd-logind[822]: Removed session 52.
Nov 25 04:28:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:28:53
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:28:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:28:54 np0005534516 nova_compute[253538]: 2025-11-25 09:28:54.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:28:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:28:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:55 np0005534516 nova_compute[253538]: 2025-11-25 09:28:55.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:58 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:28:58.128 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:28:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:28:59 np0005534516 nova_compute[253538]: 2025-11-25 09:28:59.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:28:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:00 np0005534516 nova_compute[253538]: 2025-11-25 09:29:00.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:00 np0005534516 podman[434271]: 2025-11-25 09:29:00.82171315 +0000 UTC m=+0.074072890 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:29:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:00 np0005534516 podman[434272]: 2025-11-25 09:29:00.835633879 +0000 UTC m=+0.073747441 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 04:29:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:04 np0005534516 nova_compute[253538]: 2025-11-25 09:29:04.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:04 np0005534516 nova_compute[253538]: 2025-11-25 09:29:04.299 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:29:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:05 np0005534516 nova_compute[253538]: 2025-11-25 09:29:05.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:05 np0005534516 nova_compute[253538]: 2025-11-25 09:29:05.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 85616236-fee4-48d4-b65a-d252ee4b87aa does not exist
Nov 25 04:29:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev fcd345c6-23a4-4bb8-ab67-352d3f27267a does not exist
Nov 25 04:29:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 69b84dc9-16c1-4080-b454-8fe78df2231a does not exist
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:29:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:29:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:29:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:08 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.422674677 +0000 UTC m=+0.166329794 container create ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.331525723 +0000 UTC m=+0.075180860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:08 np0005534516 systemd[1]: Started libpod-conmon-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope.
Nov 25 04:29:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.566098339 +0000 UTC m=+0.309753506 container init ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.57717865 +0000 UTC m=+0.320833777 container start ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:29:08 np0005534516 wizardly_williams[434596]: 167 167
Nov 25 04:29:08 np0005534516 systemd[1]: libpod-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope: Deactivated successfully.
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.594435461 +0000 UTC m=+0.338090578 container attach ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.594972016 +0000 UTC m=+0.338627133 container died ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:29:08 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f084e989184c3e20d2e3ed2efa2f1909b0592970028c88cb4643fa055ee7bff9-merged.mount: Deactivated successfully.
Nov 25 04:29:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:08 np0005534516 podman[434579]: 2025-11-25 09:29:08.861351837 +0000 UTC m=+0.605006954 container remove ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:29:08 np0005534516 systemd[1]: libpod-conmon-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope: Deactivated successfully.
Nov 25 04:29:09 np0005534516 podman[434620]: 2025-11-25 09:29:09.124191073 +0000 UTC m=+0.096013809 container create 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 04:29:09 np0005534516 podman[434620]: 2025-11-25 09:29:09.059718715 +0000 UTC m=+0.031541451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:09 np0005534516 nova_compute[253538]: 2025-11-25 09:29:09.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:09 np0005534516 systemd[1]: Started libpod-conmon-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope.
Nov 25 04:29:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:09 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.269060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949269100, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1651, "num_deletes": 252, "total_data_size": 2663824, "memory_usage": 2702912, "flush_reason": "Manual Compaction"}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 25 04:29:09 np0005534516 podman[434620]: 2025-11-25 09:29:09.286289982 +0000 UTC m=+0.258112708 container init 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:09 np0005534516 podman[434620]: 2025-11-25 09:29:09.297812706 +0000 UTC m=+0.269635442 container start 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949339965, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2616092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66491, "largest_seqno": 68141, "table_properties": {"data_size": 2608407, "index_size": 4627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15858, "raw_average_key_size": 20, "raw_value_size": 2592936, "raw_average_value_size": 3294, "num_data_blocks": 207, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062774, "oldest_key_time": 1764062774, "file_creation_time": 1764062949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 70953 microseconds, and 6702 cpu microseconds.
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:29:09 np0005534516 podman[434620]: 2025-11-25 09:29:09.345615899 +0000 UTC m=+0.317438635 container attach 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.340011) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2616092 bytes OK
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.340032) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.345978) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346019) EVENT_LOG_v1 {"time_micros": 1764062949346010, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346043) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2656695, prev total WAL file size 2656695, number of live WAL files 2.
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2554KB)], [158(10MB)]
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949347030, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13200617, "oldest_snapshot_seqno": -1}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8628 keys, 11472551 bytes, temperature: kUnknown
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949444276, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11472551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11416045, "index_size": 33801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 226544, "raw_average_key_size": 26, "raw_value_size": 11263202, "raw_average_value_size": 1305, "num_data_blocks": 1314, "num_entries": 8628, "num_filter_entries": 8628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.445121) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11472551 bytes
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.454144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 117.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.4) OK, records in: 9148, records dropped: 520 output_compression: NoCompression
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.454180) EVENT_LOG_v1 {"time_micros": 1764062949454166, "job": 98, "event": "compaction_finished", "compaction_time_micros": 97811, "compaction_time_cpu_micros": 28123, "output_level": 6, "num_output_files": 1, "total_output_size": 11472551, "num_input_records": 9148, "num_output_records": 8628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949455174, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949457726, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:09 np0005534516 nova_compute[253538]: 2025-11-25 09:29:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:09 np0005534516 nova_compute[253538]: 2025-11-25 09:29:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:29:09 np0005534516 nova_compute[253538]: 2025-11-25 09:29:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:29:09 np0005534516 nova_compute[253538]: 2025-11-25 09:29:09.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:29:10 np0005534516 musing_antonelli[434637]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:29:10 np0005534516 musing_antonelli[434637]: --> relative data size: 1.0
Nov 25 04:29:10 np0005534516 musing_antonelli[434637]: --> All data devices are unavailable
Nov 25 04:29:10 np0005534516 systemd[1]: libpod-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Deactivated successfully.
Nov 25 04:29:10 np0005534516 podman[434620]: 2025-11-25 09:29:10.399960203 +0000 UTC m=+1.371782919 container died 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:29:10 np0005534516 systemd[1]: libpod-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Consumed 1.048s CPU time.
Nov 25 04:29:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80-merged.mount: Deactivated successfully.
Nov 25 04:29:10 np0005534516 podman[434620]: 2025-11-25 09:29:10.454927832 +0000 UTC m=+1.426750548 container remove 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:29:10 np0005534516 systemd[1]: libpod-conmon-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Deactivated successfully.
Nov 25 04:29:10 np0005534516 nova_compute[253538]: 2025-11-25 09:29:10.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.046567441 +0000 UTC m=+0.037861094 container create a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:29:11 np0005534516 systemd[1]: Started libpod-conmon-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope.
Nov 25 04:29:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.030823121 +0000 UTC m=+0.022116794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.139832773 +0000 UTC m=+0.131126476 container init a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.14924622 +0000 UTC m=+0.140539873 container start a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.153023573 +0000 UTC m=+0.144317226 container attach a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:29:11 np0005534516 stupefied_robinson[434835]: 167 167
Nov 25 04:29:11 np0005534516 systemd[1]: libpod-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope: Deactivated successfully.
Nov 25 04:29:11 np0005534516 conmon[434835]: conmon a7402d1d5ef07897ff3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope/container/memory.events
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.159267733 +0000 UTC m=+0.150561386 container died a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:29:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-23a0b31698cd7b13c562e50d7ad81387d4f7b31a86f511fa69c4971ac0be243b-merged.mount: Deactivated successfully.
Nov 25 04:29:11 np0005534516 podman[434819]: 2025-11-25 09:29:11.20792062 +0000 UTC m=+0.199214273 container remove a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 04:29:11 np0005534516 systemd[1]: libpod-conmon-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope: Deactivated successfully.
Nov 25 04:29:11 np0005534516 podman[434861]: 2025-11-25 09:29:11.422587822 +0000 UTC m=+0.075381446 container create 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:29:11 np0005534516 podman[434861]: 2025-11-25 09:29:11.372125006 +0000 UTC m=+0.024918660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:11 np0005534516 systemd[1]: Started libpod-conmon-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope.
Nov 25 04:29:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:11 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:11 np0005534516 podman[434861]: 2025-11-25 09:29:11.69579834 +0000 UTC m=+0.348591994 container init 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:29:11 np0005534516 podman[434861]: 2025-11-25 09:29:11.703338625 +0000 UTC m=+0.356132249 container start 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:29:11 np0005534516 podman[434861]: 2025-11-25 09:29:11.801319526 +0000 UTC m=+0.454113160 container attach 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]: {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    "0": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "devices": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "/dev/loop3"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            ],
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_name": "ceph_lv0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_size": "21470642176",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "name": "ceph_lv0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "tags": {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_name": "ceph",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.crush_device_class": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.encrypted": "0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_id": "0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.vdo": "0"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            },
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "vg_name": "ceph_vg0"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        }
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    ],
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    "1": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "devices": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "/dev/loop4"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            ],
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_name": "ceph_lv1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_size": "21470642176",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "name": "ceph_lv1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "tags": {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_name": "ceph",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.crush_device_class": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.encrypted": "0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_id": "1",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.vdo": "0"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            },
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "vg_name": "ceph_vg1"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        }
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    ],
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    "2": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "devices": [
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "/dev/loop5"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            ],
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_name": "ceph_lv2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_size": "21470642176",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "name": "ceph_lv2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "tags": {
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.cluster_name": "ceph",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.crush_device_class": "",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.encrypted": "0",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osd_id": "2",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:                "ceph.vdo": "0"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            },
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "type": "block",
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:            "vg_name": "ceph_vg2"
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:        }
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]:    ]
Nov 25 04:29:12 np0005534516 exciting_shaw[434877]: }
Nov 25 04:29:12 np0005534516 systemd[1]: libpod-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope: Deactivated successfully.
Nov 25 04:29:12 np0005534516 podman[434887]: 2025-11-25 09:29:12.553138893 +0000 UTC m=+0.027119290 container died 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:29:12 np0005534516 systemd[1]: var-lib-containers-storage-overlay-275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13-merged.mount: Deactivated successfully.
Nov 25 04:29:12 np0005534516 podman[434887]: 2025-11-25 09:29:12.620781787 +0000 UTC m=+0.094762174 container remove 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:29:12 np0005534516 systemd[1]: libpod-conmon-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope: Deactivated successfully.
Nov 25 04:29:12 np0005534516 podman[434886]: 2025-11-25 09:29:12.647454024 +0000 UTC m=+0.108819407 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:29:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.295785859 +0000 UTC m=+0.046935951 container create 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:29:13 np0005534516 systemd[1]: Started libpod-conmon-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope.
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.273533153 +0000 UTC m=+0.024683245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.388890557 +0000 UTC m=+0.140040669 container init 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.39704072 +0000 UTC m=+0.148190812 container start 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.400088883 +0000 UTC m=+0.151238975 container attach 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:29:13 np0005534516 systemd[1]: libpod-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope: Deactivated successfully.
Nov 25 04:29:13 np0005534516 quizzical_black[435083]: 167 167
Nov 25 04:29:13 np0005534516 conmon[435083]: conmon 9b67d3bf36eb249d9bf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope/container/memory.events
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.405457459 +0000 UTC m=+0.156607551 container died 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:29:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a1c75af54530410171b20f11f2946a5db3921942cfcc35ae1460a292cfe0ac51-merged.mount: Deactivated successfully.
Nov 25 04:29:13 np0005534516 podman[435067]: 2025-11-25 09:29:13.444026151 +0000 UTC m=+0.195176243 container remove 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:29:13 np0005534516 systemd[1]: libpod-conmon-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope: Deactivated successfully.
Nov 25 04:29:13 np0005534516 nova_compute[253538]: 2025-11-25 09:29:13.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:13 np0005534516 podman[435107]: 2025-11-25 09:29:13.642985885 +0000 UTC m=+0.046620233 container create 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 04:29:13 np0005534516 systemd[1]: Started libpod-conmon-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope.
Nov 25 04:29:13 np0005534516 podman[435107]: 2025-11-25 09:29:13.624628384 +0000 UTC m=+0.028262752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:29:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:29:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:13 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:29:13 np0005534516 podman[435107]: 2025-11-25 09:29:13.748241694 +0000 UTC m=+0.151876082 container init 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:29:13 np0005534516 podman[435107]: 2025-11-25 09:29:13.757083726 +0000 UTC m=+0.160718074 container start 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:29:13 np0005534516 podman[435107]: 2025-11-25 09:29:13.76091992 +0000 UTC m=+0.164554298 container attach 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:29:14 np0005534516 nova_compute[253538]: 2025-11-25 09:29:14.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]: {
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_id": 1,
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "type": "bluestore"
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    },
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_id": 2,
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "type": "bluestore"
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    },
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_id": 0,
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:        "type": "bluestore"
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]:    }
Nov 25 04:29:14 np0005534516 reverent_mirzakhani[435124]: }
Nov 25 04:29:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:14 np0005534516 systemd[1]: libpod-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Deactivated successfully.
Nov 25 04:29:14 np0005534516 podman[435107]: 2025-11-25 09:29:14.853941598 +0000 UTC m=+1.257575936 container died 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:29:14 np0005534516 systemd[1]: libpod-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Consumed 1.101s CPU time.
Nov 25 04:29:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4-merged.mount: Deactivated successfully.
Nov 25 04:29:14 np0005534516 podman[435107]: 2025-11-25 09:29:14.957700766 +0000 UTC m=+1.361335114 container remove 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:29:14 np0005534516 systemd[1]: libpod-conmon-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Deactivated successfully.
Nov 25 04:29:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:29:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:29:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 74622ce9-fae0-4a38-b82f-58602ccd19ba does not exist
Nov 25 04:29:15 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 081c9abb-f7a6-48c8-9e0c-75952509a069 does not exist
Nov 25 04:29:15 np0005534516 nova_compute[253538]: 2025-11-25 09:29:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:15 np0005534516 nova_compute[253538]: 2025-11-25 09:29:15.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:29:16 np0005534516 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:16 np0005534516 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:16 np0005534516 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:29:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:19 np0005534516 nova_compute[253538]: 2025-11-25 09:29:19.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:20 np0005534516 nova_compute[253538]: 2025-11-25 09:29:20.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:21 np0005534516 nova_compute[253538]: 2025-11-25 09:29:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:24 np0005534516 nova_compute[253538]: 2025-11-25 09:29:24.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:25 np0005534516 nova_compute[253538]: 2025-11-25 09:29:25.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:25 np0005534516 nova_compute[253538]: 2025-11-25 09:29:25.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:29:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:29:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:29:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.168 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:29:29 np0005534516 nova_compute[253538]: 2025-11-25 09:29:29.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:29:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:29:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627516190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.073 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.253 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.254 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.333 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:29:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:29:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966608219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:29:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.857 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.863 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.876 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:29:30 np0005534516 nova_compute[253538]: 2025-11-25 09:29:30.879 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:29:31 np0005534516 podman[435265]: 2025-11-25 09:29:31.805020409 +0000 UTC m=+0.054924929 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 04:29:31 np0005534516 podman[435264]: 2025-11-25 09:29:31.805875912 +0000 UTC m=+0.055624197 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 04:29:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:34 np0005534516 nova_compute[253538]: 2025-11-25 09:29:34.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:35 np0005534516 nova_compute[253538]: 2025-11-25 09:29:35.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:39 np0005534516 nova_compute[253538]: 2025-11-25 09:29:39.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:40 np0005534516 nova_compute[253538]: 2025-11-25 09:29:40.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:29:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:29:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:42 np0005534516 podman[435302]: 2025-11-25 09:29:42.877257791 +0000 UTC m=+0.126353315 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 04:29:44 np0005534516 nova_compute[253538]: 2025-11-25 09:29:44.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:45 np0005534516 nova_compute[253538]: 2025-11-25 09:29:45.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:49 np0005534516 nova_compute[253538]: 2025-11-25 09:29:49.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:50 np0005534516 nova_compute[253538]: 2025-11-25 09:29:50.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:29:53
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'volumes']
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:29:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:29:54 np0005534516 nova_compute[253538]: 2025-11-25 09:29:54.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:29:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:55 np0005534516 nova_compute[253538]: 2025-11-25 09:29:55.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:29:59 np0005534516 nova_compute[253538]: 2025-11-25 09:29:59.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.353291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999353426, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 627, "num_deletes": 251, "total_data_size": 760086, "memory_usage": 772264, "flush_reason": "Manual Compaction"}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999385862, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 503622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68142, "largest_seqno": 68768, "table_properties": {"data_size": 500696, "index_size": 897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7769, "raw_average_key_size": 20, "raw_value_size": 494589, "raw_average_value_size": 1304, "num_data_blocks": 40, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062950, "oldest_key_time": 1764062950, "file_creation_time": 1764062999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 32593 microseconds, and 2796 cpu microseconds.
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.385940) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 503622 bytes OK
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.385974) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406048) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406119) EVENT_LOG_v1 {"time_micros": 1764062999406104, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406154) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 756718, prev total WAL file size 756718, number of live WAL files 2.
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.407107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373535' seq:72057594037927935, type:22 .. '6D6772737461740033303037' seq:0, type:0; will stop at (end)
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(491KB)], [161(10MB)]
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999407169, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11976173, "oldest_snapshot_seqno": -1}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8513 keys, 8913090 bytes, temperature: kUnknown
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999616674, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 8913090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8861612, "index_size": 29093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21317, "raw_key_size": 224311, "raw_average_key_size": 26, "raw_value_size": 8715034, "raw_average_value_size": 1023, "num_data_blocks": 1120, "num_entries": 8513, "num_filter_entries": 8513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.617069) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 8913090 bytes
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.677748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.1 rd, 42.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(41.5) write-amplify(17.7) OK, records in: 9007, records dropped: 494 output_compression: NoCompression
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.677791) EVENT_LOG_v1 {"time_micros": 1764062999677774, "job": 100, "event": "compaction_finished", "compaction_time_micros": 209633, "compaction_time_cpu_micros": 44747, "output_level": 6, "num_output_files": 1, "total_output_size": 8913090, "num_input_records": 9007, "num_output_records": 8513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999678084, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999680724, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.407042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:29:59 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:30:00 np0005534516 nova_compute[253538]: 2025-11-25 09:30:00.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:02 np0005534516 podman[435339]: 2025-11-25 09:30:02.815657278 +0000 UTC m=+0.055686859 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:30:02 np0005534516 podman[435338]: 2025-11-25 09:30:02.828377094 +0000 UTC m=+0.073608397 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 04:30:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:04 np0005534516 nova_compute[253538]: 2025-11-25 09:30:04.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:30:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:04 np0005534516 nova_compute[253538]: 2025-11-25 09:30:04.880 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:30:05 np0005534516 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.5 total, 600.0 interval#012Cumulative writes: 46K writes, 188K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.83 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 620 writes, 1636 keys, 620 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s#012Interval WAL: 620 writes, 275 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 04:30:05 np0005534516 nova_compute[253538]: 2025-11-25 09:30:05.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:07 np0005534516 nova_compute[253538]: 2025-11-25 09:30:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:09 np0005534516 nova_compute[253538]: 2025-11-25 09:30:09.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:10 np0005534516 nova_compute[253538]: 2025-11-25 09:30:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:10 np0005534516 nova_compute[253538]: 2025-11-25 09:30:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:30:10 np0005534516 nova_compute[253538]: 2025-11-25 09:30:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:30:10 np0005534516 nova_compute[253538]: 2025-11-25 09:30:10.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:30:10 np0005534516 nova_compute[253538]: 2025-11-25 09:30:10.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:30:11 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.4 total, 600.0 interval
Cumulative writes: 46K writes, 181K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 691 writes, 1763 keys, 691 commit groups, 1.0 writes per commit group, ingest: 0.85 MB, 0.00 MB/s
Interval WAL: 691 writes, 309 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 25 04:30:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:13 np0005534516 nova_compute[253538]: 2025-11-25 09:30:13.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:13 np0005534516 podman[435377]: 2025-11-25 09:30:13.864411759 +0000 UTC m=+0.120885316 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 04:30:14 np0005534516 nova_compute[253538]: 2025-11-25 09:30:14.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:15 np0005534516 nova_compute[253538]: 2025-11-25 09:30:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:15 np0005534516 nova_compute[253538]: 2025-11-25 09:30:15.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 749fd843-ee72-432a-923d-0fa01d24d1e5 does not exist
Nov 25 04:30:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a7859bbe-b745-4211-a546-984b3865b994 does not exist
Nov 25 04:30:16 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1bd17a76-87e7-444c-80d0-10cf5d0f9dc2 does not exist
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.589399989 +0000 UTC m=+0.042599653 container create 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:30:16 np0005534516 systemd[1]: Started libpod-conmon-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope.
Nov 25 04:30:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.571244873 +0000 UTC m=+0.024444557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.670076487 +0000 UTC m=+0.123276171 container init 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.678550128 +0000 UTC m=+0.131749792 container start 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.681657953 +0000 UTC m=+0.134857647 container attach 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:30:16 np0005534516 suspicious_lamport[435693]: 167 167
Nov 25 04:30:16 np0005534516 systemd[1]: libpod-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope: Deactivated successfully.
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.684589294 +0000 UTC m=+0.137788958 container died 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-81cd80287068906d92f04fa3ad4fb8bbf26f9216830fe3075d48f807762c3b75-merged.mount: Deactivated successfully.
Nov 25 04:30:16 np0005534516 podman[435676]: 2025-11-25 09:30:16.737539306 +0000 UTC m=+0.190738980 container remove 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:16 np0005534516 systemd[1]: libpod-conmon-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope: Deactivated successfully.
Nov 25 04:30:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:16 np0005534516 podman[435717]: 2025-11-25 09:30:16.896079699 +0000 UTC m=+0.054391293 container create 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:30:16 np0005534516 systemd[1]: Started libpod-conmon-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope.
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:16 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:30:16 np0005534516 podman[435717]: 2025-11-25 09:30:16.866664258 +0000 UTC m=+0.024975932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:16 np0005534516 podman[435717]: 2025-11-25 09:30:16.997643708 +0000 UTC m=+0.155955302 container init 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:30:17 np0005534516 podman[435717]: 2025-11-25 09:30:17.008953936 +0000 UTC m=+0.167265520 container start 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:30:17 np0005534516 podman[435717]: 2025-11-25 09:30:17.012279166 +0000 UTC m=+0.170590820 container attach 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:30:17 np0005534516 nova_compute[253538]: 2025-11-25 09:30:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:17 np0005534516 nova_compute[253538]: 2025-11-25 09:30:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:17 np0005534516 nova_compute[253538]: 2025-11-25 09:30:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:30:18 np0005534516 unruffled_gates[435733]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:30:18 np0005534516 unruffled_gates[435733]: --> relative data size: 1.0
Nov 25 04:30:18 np0005534516 unruffled_gates[435733]: --> All data devices are unavailable
Nov 25 04:30:18 np0005534516 systemd[1]: libpod-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Deactivated successfully.
Nov 25 04:30:18 np0005534516 podman[435717]: 2025-11-25 09:30:18.166683289 +0000 UTC m=+1.324994913 container died 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:30:18 np0005534516 systemd[1]: libpod-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Consumed 1.111s CPU time.
Nov 25 04:30:18 np0005534516 systemd[1]: var-lib-containers-storage-overlay-949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1-merged.mount: Deactivated successfully.
Nov 25 04:30:18 np0005534516 podman[435717]: 2025-11-25 09:30:18.35414612 +0000 UTC m=+1.512457724 container remove 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:30:18 np0005534516 systemd[1]: libpod-conmon-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Deactivated successfully.
Nov 25 04:30:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.167793951 +0000 UTC m=+0.078707437 container create 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:30:19 np0005534516 nova_compute[253538]: 2025-11-25 09:30:19.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.125511678 +0000 UTC m=+0.036425214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:19 np0005534516 systemd[1]: Started libpod-conmon-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope.
Nov 25 04:30:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.289783386 +0000 UTC m=+0.200696872 container init 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.301764313 +0000 UTC m=+0.212677759 container start 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.307580642 +0000 UTC m=+0.218494098 container attach 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 04:30:19 np0005534516 amazing_brahmagupta[435936]: 167 167
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.308825645 +0000 UTC m=+0.219739101 container died 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:30:19 np0005534516 systemd[1]: libpod-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope: Deactivated successfully.
Nov 25 04:30:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5c95b4a513ac5db01e4c9cf5b8baccf5abcfd7f9f660906df15c50a7f4f79006-merged.mount: Deactivated successfully.
Nov 25 04:30:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:19 np0005534516 podman[435919]: 2025-11-25 09:30:19.347619083 +0000 UTC m=+0.258532529 container remove 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:30:19 np0005534516 systemd[1]: libpod-conmon-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope: Deactivated successfully.
Nov 25 04:30:19 np0005534516 podman[435958]: 2025-11-25 09:30:19.556360804 +0000 UTC m=+0.066186626 container create c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:30:19 np0005534516 systemd[1]: Started libpod-conmon-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope.
Nov 25 04:30:19 np0005534516 podman[435958]: 2025-11-25 09:30:19.524934697 +0000 UTC m=+0.034760559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:19.628 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:30:19 np0005534516 nova_compute[253538]: 2025-11-25 09:30:19.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:19 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:19.631 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:30:19 np0005534516 podman[435958]: 2025-11-25 09:30:19.674669169 +0000 UTC m=+0.184495041 container init c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:30:19 np0005534516 podman[435958]: 2025-11-25 09:30:19.685538346 +0000 UTC m=+0.195364168 container start c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:30:19 np0005534516 podman[435958]: 2025-11-25 09:30:19.726709137 +0000 UTC m=+0.236534919 container attach c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]: {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    "0": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "devices": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "/dev/loop3"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            ],
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_name": "ceph_lv0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_size": "21470642176",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "name": "ceph_lv0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "tags": {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_name": "ceph",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.crush_device_class": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.encrypted": "0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_id": "0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.vdo": "0"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            },
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "vg_name": "ceph_vg0"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        }
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    ],
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    "1": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "devices": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "/dev/loop4"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            ],
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_name": "ceph_lv1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_size": "21470642176",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "name": "ceph_lv1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "tags": {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_name": "ceph",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.crush_device_class": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.encrypted": "0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_id": "1",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.vdo": "0"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            },
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "vg_name": "ceph_vg1"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        }
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    ],
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    "2": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "devices": [
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "/dev/loop5"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            ],
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_name": "ceph_lv2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_size": "21470642176",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "name": "ceph_lv2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "tags": {
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.cluster_name": "ceph",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.crush_device_class": "",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.encrypted": "0",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osd_id": "2",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:                "ceph.vdo": "0"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            },
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "type": "block",
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:            "vg_name": "ceph_vg2"
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:        }
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]:    ]
Nov 25 04:30:20 np0005534516 frosty_mcclintock[435975]: }
Nov 25 04:30:20 np0005534516 systemd[1]: libpod-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope: Deactivated successfully.
Nov 25 04:30:20 np0005534516 conmon[435975]: conmon c26b4f693fbc29b6237e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope/container/memory.events
Nov 25 04:30:20 np0005534516 podman[435958]: 2025-11-25 09:30:20.510062014 +0000 UTC m=+1.019887796 container died c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 04:30:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a-merged.mount: Deactivated successfully.
Nov 25 04:30:20 np0005534516 podman[435958]: 2025-11-25 09:30:20.57155518 +0000 UTC m=+1.081380962 container remove c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:30:20 np0005534516 systemd[1]: libpod-conmon-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope: Deactivated successfully.
Nov 25 04:30:20 np0005534516 nova_compute[253538]: 2025-11-25 09:30:20.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.252504774 +0000 UTC m=+0.028444886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.705391731 +0000 UTC m=+0.481331823 container create 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:30:21 np0005534516 systemd[1]: Started libpod-conmon-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope.
Nov 25 04:30:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.793064481 +0000 UTC m=+0.569004573 container init 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.800895045 +0000 UTC m=+0.576835107 container start 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.804660487 +0000 UTC m=+0.580600569 container attach 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:30:21 np0005534516 objective_clarke[436156]: 167 167
Nov 25 04:30:21 np0005534516 systemd[1]: libpod-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope: Deactivated successfully.
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.807503305 +0000 UTC m=+0.583443367 container died 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 04:30:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-20245aed7b25b3ff6b2a1a175554599aacd9484b98f09883c7a7acca11260681-merged.mount: Deactivated successfully.
Nov 25 04:30:21 np0005534516 podman[436139]: 2025-11-25 09:30:21.854679441 +0000 UTC m=+0.630619513 container remove 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:30:21 np0005534516 systemd[1]: libpod-conmon-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope: Deactivated successfully.
Nov 25 04:30:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:30:21 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6002.4 total, 600.0 interval#012Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s#012Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 25 04:30:22 np0005534516 podman[436180]: 2025-11-25 09:30:22.076658652 +0000 UTC m=+0.063128691 container create 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:30:22 np0005534516 systemd[1]: Started libpod-conmon-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope.
Nov 25 04:30:22 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:30:22 np0005534516 podman[436180]: 2025-11-25 09:30:22.054972702 +0000 UTC m=+0.041442791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:30:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:22 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:30:22 np0005534516 podman[436180]: 2025-11-25 09:30:22.160277942 +0000 UTC m=+0.146747991 container init 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:30:22 np0005534516 podman[436180]: 2025-11-25 09:30:22.169245066 +0000 UTC m=+0.155715105 container start 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:30:22 np0005534516 podman[436180]: 2025-11-25 09:30:22.174400857 +0000 UTC m=+0.160870896 container attach 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:30:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]: {
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_id": 1,
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "type": "bluestore"
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    },
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_id": 2,
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "type": "bluestore"
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    },
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_id": 0,
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:        "type": "bluestore"
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]:    }
Nov 25 04:30:23 np0005534516 sharp_matsumoto[436197]: }
Nov 25 04:30:23 np0005534516 systemd[1]: libpod-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Deactivated successfully.
Nov 25 04:30:23 np0005534516 systemd[1]: libpod-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Consumed 1.099s CPU time.
Nov 25 04:30:23 np0005534516 podman[436180]: 2025-11-25 09:30:23.263695144 +0000 UTC m=+1.250165223 container died 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2-merged.mount: Deactivated successfully.
Nov 25 04:30:23 np0005534516 podman[436180]: 2025-11-25 09:30:23.32443273 +0000 UTC m=+1.310902799 container remove 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 04:30:23 np0005534516 systemd[1]: libpod-conmon-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Deactivated successfully.
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e89a28e8-21e1-4c2f-8059-eb132feb050d does not exist
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 64a1d304-03a0-4707-b955-4e804152c9ff does not exist
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:23 np0005534516 nova_compute[253538]: 2025-11-25 09:30:23.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:23 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:30:24 np0005534516 systemd-logind[822]: New session 53 of user zuul.
Nov 25 04:30:24 np0005534516 systemd[1]: Started Session 53 of User zuul.
Nov 25 04:30:24 np0005534516 nova_compute[253538]: 2025-11-25 09:30:24.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:25 np0005534516 systemd[1]: Reloading.
Nov 25 04:30:25 np0005534516 nova_compute[253538]: 2025-11-25 09:30:25.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:25 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:30:25 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:30:26 np0005534516 systemd[1]: Reloading.
Nov 25 04:30:26 np0005534516 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:30:26 np0005534516 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:30:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:26 np0005534516 systemd[1]: Starting Podman API Socket...
Nov 25 04:30:26 np0005534516 systemd[1]: Listening on Podman API Socket.
Nov 25 04:30:27 np0005534516 dbus-broker-launch[813]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 25 04:30:27 np0005534516 systemd[1]: podman.socket: Deactivated successfully.
Nov 25 04:30:27 np0005534516 systemd[1]: Closed Podman API Socket.
Nov 25 04:30:27 np0005534516 systemd[1]: Stopping Podman API Socket...
Nov 25 04:30:27 np0005534516 systemd[1]: Starting Podman API Socket...
Nov 25 04:30:27 np0005534516 systemd[1]: Listening on Podman API Socket.
Nov 25 04:30:27 np0005534516 systemd-logind[822]: New session 54 of user zuul.
Nov 25 04:30:27 np0005534516 systemd[1]: Started Session 54 of User zuul.
Nov 25 04:30:27 np0005534516 systemd[1]: Starting Podman API Service...
Nov 25 04:30:27 np0005534516 systemd[1]: Started Podman API Service.
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Setting parallel job count to 25"
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Using sqlite as database backend"
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 04:30:27 np0005534516 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 04:30:27 np0005534516 podman[436547]: @ - - [25/Nov/2025:09:30:27 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 04:30:27 np0005534516 podman[436547]: @ - - [25/Nov/2025:09:30:27 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24899 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 04:30:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:30:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:30:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:30:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:30:29 np0005534516 nova_compute[253538]: 2025-11-25 09:30:29.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:30:29 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:29.634 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:30:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:30:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256782775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.105 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.299 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.303 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.304 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.304 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.379 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.396 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:30:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:30:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508826971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.865 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.870 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.887 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.888 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:30:30 np0005534516 nova_compute[253538]: 2025-11-25 09:30:30.888 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:30:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:33 np0005534516 podman[436656]: 2025-11-25 09:30:33.80153167 +0000 UTC m=+0.048113143 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:30:33 np0005534516 podman[436655]: 2025-11-25 09:30:33.806912406 +0000 UTC m=+0.056550423 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 04:30:34 np0005534516 nova_compute[253538]: 2025-11-25 09:30:34.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:35 np0005534516 nova_compute[253538]: 2025-11-25 09:30:35.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:39 np0005534516 nova_compute[253538]: 2025-11-25 09:30:39.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:40 np0005534516 nova_compute[253538]: 2025-11-25 09:30:40.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:30:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:30:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:30:42 np0005534516 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 04:30:42 np0005534516 podman[436547]: time="2025-11-25T09:30:42Z" level=info msg="Received shutdown.Stop(), terminating!" PID=436547
Nov 25 04:30:42 np0005534516 systemd[1]: podman.service: Deactivated successfully.
Nov 25 04:30:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:44 np0005534516 nova_compute[253538]: 2025-11-25 09:30:44.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:44 np0005534516 podman[436691]: 2025-11-25 09:30:44.828853676 +0000 UTC m=+0.082658353 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 04:30:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:45 np0005534516 nova_compute[253538]: 2025-11-25 09:30:45.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:49 np0005534516 nova_compute[253538]: 2025-11-25 09:30:49.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:50 np0005534516 nova_compute[253538]: 2025-11-25 09:30:50.878 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:30:53
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr']
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:30:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:30:54 np0005534516 nova_compute[253538]: 2025-11-25 09:30:54.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:30:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:30:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:55 np0005534516 nova_compute[253538]: 2025-11-25 09:30:55.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:30:59 np0005534516 nova_compute[253538]: 2025-11-25 09:30:59.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:30:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:00.622 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:31:00 np0005534516 nova_compute[253538]: 2025-11-25 09:31:00.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:00 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:00.623 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:31:00 np0005534516 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 04:31:00 np0005534516 systemd[1]: session-53.scope: Consumed 1.410s CPU time.
Nov 25 04:31:00 np0005534516 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 25 04:31:00 np0005534516 systemd-logind[822]: Removed session 53.
Nov 25 04:31:00 np0005534516 nova_compute[253538]: 2025-11-25 09:31:00.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:01 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:01.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:31:01 np0005534516 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 04:31:01 np0005534516 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 25 04:31:01 np0005534516 systemd-logind[822]: Removed session 54.
Nov 25 04:31:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:04 np0005534516 nova_compute[253538]: 2025-11-25 09:31:04.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:31:04 np0005534516 podman[436718]: 2025-11-25 09:31:04.807823185 +0000 UTC m=+0.052226915 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:31:04 np0005534516 podman[436717]: 2025-11-25 09:31:04.839161279 +0000 UTC m=+0.089600553 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:31:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:05 np0005534516 nova_compute[253538]: 2025-11-25 09:31:05.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:05 np0005534516 nova_compute[253538]: 2025-11-25 09:31:05.889 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:08 np0005534516 nova_compute[253538]: 2025-11-25 09:31:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:09 np0005534516 nova_compute[253538]: 2025-11-25 09:31:09.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:10 np0005534516 nova_compute[253538]: 2025-11-25 09:31:10.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:11 np0005534516 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:11 np0005534516 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:31:11 np0005534516 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:31:11 np0005534516 nova_compute[253538]: 2025-11-25 09:31:11.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:31:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:14 np0005534516 nova_compute[253538]: 2025-11-25 09:31:14.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:15 np0005534516 nova_compute[253538]: 2025-11-25 09:31:15.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:15 np0005534516 podman[436752]: 2025-11-25 09:31:15.837228208 +0000 UTC m=+0.080662090 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:31:15 np0005534516 nova_compute[253538]: 2025-11-25 09:31:15.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:16 np0005534516 nova_compute[253538]: 2025-11-25 09:31:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:17 np0005534516 nova_compute[253538]: 2025-11-25 09:31:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:17 np0005534516 nova_compute[253538]: 2025-11-25 09:31:17.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:31:18 np0005534516 nova_compute[253538]: 2025-11-25 09:31:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:19 np0005534516 nova_compute[253538]: 2025-11-25 09:31:19.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:20 np0005534516 nova_compute[253538]: 2025-11-25 09:31:20.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4138a3c1-b9dd-4fd9-85c2-4791ab063206 does not exist
Nov 25 04:31:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c7f88d99-490a-41fb-9f2f-e524ed8dc47f does not exist
Nov 25 04:31:24 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 95c5506f-8860-4674-9265-ed5029171e33 does not exist
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:31:24 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.555 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.555 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.572 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 25 04:31:24 np0005534516 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 25 04:31:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:24 np0005534516 podman[437047]: 2025-11-25 09:31:24.9797311 +0000 UTC m=+0.064945523 container create c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:31:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:31:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:25 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:31:25 np0005534516 systemd[1]: Started libpod-conmon-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope.
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:24.943949484 +0000 UTC m=+0.029163877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:25.110627718 +0000 UTC m=+0.195842131 container init c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:25.11840769 +0000 UTC m=+0.203622073 container start c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:31:25 np0005534516 gifted_moore[437063]: 167 167
Nov 25 04:31:25 np0005534516 systemd[1]: libpod-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope: Deactivated successfully.
Nov 25 04:31:25 np0005534516 conmon[437063]: conmon c7d7d577de4703326ace <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope/container/memory.events
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:25.128556657 +0000 UTC m=+0.213771050 container attach c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:25.129251745 +0000 UTC m=+0.214466138 container died c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:31:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-241281af1d535a33dc3b8975fe7d74ef88ebdb275c246853125a64f308b86ff9-merged.mount: Deactivated successfully.
Nov 25 04:31:25 np0005534516 podman[437047]: 2025-11-25 09:31:25.237781954 +0000 UTC m=+0.322996337 container remove c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:31:25 np0005534516 systemd[1]: libpod-conmon-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope: Deactivated successfully.
Nov 25 04:31:25 np0005534516 podman[437088]: 2025-11-25 09:31:25.410155513 +0000 UTC m=+0.049380416 container create 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:25 np0005534516 systemd[1]: Started libpod-conmon-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope.
Nov 25 04:31:25 np0005534516 podman[437088]: 2025-11-25 09:31:25.387029723 +0000 UTC m=+0.026254596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:25 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:25 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:25 np0005534516 podman[437088]: 2025-11-25 09:31:25.504153356 +0000 UTC m=+0.143378229 container init 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:31:25 np0005534516 podman[437088]: 2025-11-25 09:31:25.516634766 +0000 UTC m=+0.155859639 container start 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:31:25 np0005534516 podman[437088]: 2025-11-25 09:31:25.520927834 +0000 UTC m=+0.160152707 container attach 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:31:25 np0005534516 nova_compute[253538]: 2025-11-25 09:31:25.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:26 np0005534516 competent_maxwell[437105]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:31:26 np0005534516 competent_maxwell[437105]: --> relative data size: 1.0
Nov 25 04:31:26 np0005534516 competent_maxwell[437105]: --> All data devices are unavailable
Nov 25 04:31:26 np0005534516 systemd[1]: libpod-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope: Deactivated successfully.
Nov 25 04:31:26 np0005534516 podman[437088]: 2025-11-25 09:31:26.548539609 +0000 UTC m=+1.187764492 container died 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:31:26 np0005534516 systemd[1]: var-lib-containers-storage-overlay-49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163-merged.mount: Deactivated successfully.
Nov 25 04:31:26 np0005534516 podman[437088]: 2025-11-25 09:31:26.723057126 +0000 UTC m=+1.362281989 container remove 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:26 np0005534516 systemd[1]: libpod-conmon-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope: Deactivated successfully.
Nov 25 04:31:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.365199662 +0000 UTC m=+0.048329138 container create 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:31:27 np0005534516 systemd[1]: Started libpod-conmon-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope.
Nov 25 04:31:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.342035211 +0000 UTC m=+0.025164747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.449270544 +0000 UTC m=+0.132400070 container init 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.459206955 +0000 UTC m=+0.142336411 container start 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.462650449 +0000 UTC m=+0.145779925 container attach 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:27 np0005534516 quirky_hermann[437301]: 167 167
Nov 25 04:31:27 np0005534516 systemd[1]: libpod-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope: Deactivated successfully.
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.467685936 +0000 UTC m=+0.150815392 container died 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:31:27 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9a2132794d924e88c35f9ef5602d123ae06d801638a263a0b3f1b7e8dff68f04-merged.mount: Deactivated successfully.
Nov 25 04:31:27 np0005534516 podman[437285]: 2025-11-25 09:31:27.508271772 +0000 UTC m=+0.191401228 container remove 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:31:27 np0005534516 systemd[1]: libpod-conmon-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope: Deactivated successfully.
Nov 25 04:31:27 np0005534516 podman[437325]: 2025-11-25 09:31:27.763922532 +0000 UTC m=+0.062972648 container create aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:31:27 np0005534516 systemd[1]: Started libpod-conmon-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope.
Nov 25 04:31:27 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:27 np0005534516 podman[437325]: 2025-11-25 09:31:27.739008773 +0000 UTC m=+0.038058959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:27 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:27 np0005534516 podman[437325]: 2025-11-25 09:31:27.849488055 +0000 UTC m=+0.148538191 container init aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:31:27 np0005534516 podman[437325]: 2025-11-25 09:31:27.859659912 +0000 UTC m=+0.158710048 container start aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:31:27 np0005534516 podman[437325]: 2025-11-25 09:31:27.86398345 +0000 UTC m=+0.163033596 container attach aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:31:28 np0005534516 nova_compute[253538]: 2025-11-25 09:31:28.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]: {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    "0": [
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        {
Nov 25 04:31:28 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "devices": [
Nov 25 04:31:28 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "/dev/loop3"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            ],
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_name": "ceph_lv0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_size": "21470642176",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "name": "ceph_lv0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "tags": {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_name": "ceph",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.crush_device_class": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.encrypted": "0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_id": "0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.vdo": "0"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            },
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "vg_name": "ceph_vg0"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        }
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    ],
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    "1": [
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "devices": [
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "/dev/loop4"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            ],
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_name": "ceph_lv1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_size": "21470642176",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "name": "ceph_lv1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "tags": {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_name": "ceph",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.crush_device_class": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.encrypted": "0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_id": "1",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.vdo": "0"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            },
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "vg_name": "ceph_vg1"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        }
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    ],
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    "2": [
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "devices": [
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "/dev/loop5"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            ],
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_name": "ceph_lv2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_size": "21470642176",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "name": "ceph_lv2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "tags": {
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.cluster_name": "ceph",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.crush_device_class": "",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.encrypted": "0",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osd_id": "2",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:                "ceph.vdo": "0"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            },
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "type": "block",
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:            "vg_name": "ceph_vg2"
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:        }
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]:    ]
Nov 25 04:31:28 np0005534516 intelligent_goodall[437342]: }
Nov 25 04:31:28 np0005534516 systemd[1]: libpod-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope: Deactivated successfully.
Nov 25 04:31:28 np0005534516 podman[437325]: 2025-11-25 09:31:28.651806437 +0000 UTC m=+0.950856563 container died aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:28 np0005534516 systemd[1]: var-lib-containers-storage-overlay-77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c-merged.mount: Deactivated successfully.
Nov 25 04:31:28 np0005534516 podman[437325]: 2025-11-25 09:31:28.71536045 +0000 UTC m=+1.014410556 container remove aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:31:28 np0005534516 systemd[1]: libpod-conmon-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope: Deactivated successfully.
Nov 25 04:31:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:29 np0005534516 nova_compute[253538]: 2025-11-25 09:31:29.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:31:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:31:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:31:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:31:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.42842199 +0000 UTC m=+0.047533067 container create dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:31:29 np0005534516 systemd[1]: Started libpod-conmon-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope.
Nov 25 04:31:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.406757509 +0000 UTC m=+0.025868626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.516707827 +0000 UTC m=+0.135818924 container init dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.52821126 +0000 UTC m=+0.147322337 container start dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.53185323 +0000 UTC m=+0.150964307 container attach dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 04:31:29 np0005534516 clever_saha[437522]: 167 167
Nov 25 04:31:29 np0005534516 systemd[1]: libpod-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope: Deactivated successfully.
Nov 25 04:31:29 np0005534516 conmon[437522]: conmon dffa2f8c6b0d2b284f0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope/container/memory.events
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.536861786 +0000 UTC m=+0.155972903 container died dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:31:29 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c66313d657bc415bb3e339fa827303834863e0953f55b0f351d7faf030a0c167-merged.mount: Deactivated successfully.
Nov 25 04:31:29 np0005534516 podman[437505]: 2025-11-25 09:31:29.578175852 +0000 UTC m=+0.197286939 container remove dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:31:29 np0005534516 systemd[1]: libpod-conmon-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope: Deactivated successfully.
Nov 25 04:31:29 np0005534516 podman[437546]: 2025-11-25 09:31:29.747400525 +0000 UTC m=+0.047360391 container create 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:31:29 np0005534516 systemd[1]: Started libpod-conmon-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope.
Nov 25 04:31:29 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:31:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:29 np0005534516 podman[437546]: 2025-11-25 09:31:29.731367619 +0000 UTC m=+0.031327515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:31:29 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:31:29 np0005534516 podman[437546]: 2025-11-25 09:31:29.842677083 +0000 UTC m=+0.142636949 container init 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:31:29 np0005534516 podman[437546]: 2025-11-25 09:31:29.858370771 +0000 UTC m=+0.158330677 container start 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:31:29 np0005534516 podman[437546]: 2025-11-25 09:31:29.862915635 +0000 UTC m=+0.162875531 container attach 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]: {
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_id": 1,
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "type": "bluestore"
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    },
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_id": 2,
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "type": "bluestore"
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    },
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_id": 0,
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:        "type": "bluestore"
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]:    }
Nov 25 04:31:30 np0005534516 ecstatic_williamson[437563]: }
Nov 25 04:31:30 np0005534516 systemd[1]: libpod-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope: Deactivated successfully.
Nov 25 04:31:30 np0005534516 podman[437546]: 2025-11-25 09:31:30.849981815 +0000 UTC m=+1.149941701 container died 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:31:30 np0005534516 systemd[1]: var-lib-containers-storage-overlay-787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965-merged.mount: Deactivated successfully.
Nov 25 04:31:30 np0005534516 nova_compute[253538]: 2025-11-25 09:31:30.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:30 np0005534516 podman[437546]: 2025-11-25 09:31:30.91105746 +0000 UTC m=+1.211017326 container remove 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:31:30 np0005534516 systemd[1]: libpod-conmon-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope: Deactivated successfully.
Nov 25 04:31:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:31:30 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:30 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:31:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2dfa15eb-6009-4d41-8d4f-5e0eb5e79605 does not exist
Nov 25 04:31:31 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c22bfd70-aa4f-4b6f-8c25-d1ba913229b8 does not exist
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:31:31 np0005534516 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:31:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:31 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:31:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:31:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735912451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.024 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.166 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3591MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.223 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.223 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.243 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:31:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:31:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028974768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.734 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.739 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.754 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.756 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.756 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.757 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:31:32 np0005534516 nova_compute[253538]: 2025-11-25 09:31:32.757 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:31:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:34 np0005534516 nova_compute[253538]: 2025-11-25 09:31:34.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:35 np0005534516 podman[437704]: 2025-11-25 09:31:35.810672513 +0000 UTC m=+0.064236722 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:31:35 np0005534516 podman[437705]: 2025-11-25 09:31:35.8369579 +0000 UTC m=+0.087653661 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 04:31:35 np0005534516 nova_compute[253538]: 2025-11-25 09:31:35.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:39 np0005534516 nova_compute[253538]: 2025-11-25 09:31:39.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:40 np0005534516 nova_compute[253538]: 2025-11-25 09:31:40.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:31:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:31:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:31:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:44 np0005534516 nova_compute[253538]: 2025-11-25 09:31:44.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:45 np0005534516 nova_compute[253538]: 2025-11-25 09:31:45.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:46 np0005534516 podman[437741]: 2025-11-25 09:31:46.840483959 +0000 UTC m=+0.097213202 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 04:31:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:49 np0005534516 nova_compute[253538]: 2025-11-25 09:31:49.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:50 np0005534516 nova_compute[253538]: 2025-11-25 09:31:50.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:31:53
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.meta']
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:31:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:31:54 np0005534516 nova_compute[253538]: 2025-11-25 09:31:54.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:31:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:55 np0005534516 nova_compute[253538]: 2025-11-25 09:31:55.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:31:59 np0005534516 nova_compute[253538]: 2025-11-25 09:31:59.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:31:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:31:59 np0005534516 nova_compute[253538]: 2025-11-25 09:31:59.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:00 np0005534516 nova_compute[253538]: 2025-11-25 09:32:00.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:04 np0005534516 nova_compute[253538]: 2025-11-25 09:32:04.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:32:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:05 np0005534516 nova_compute[253538]: 2025-11-25 09:32:05.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:05 np0005534516 nova_compute[253538]: 2025-11-25 09:32:05.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:05 np0005534516 nova_compute[253538]: 2025-11-25 09:32:05.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:32:05 np0005534516 nova_compute[253538]: 2025-11-25 09:32:05.581 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:32:05 np0005534516 nova_compute[253538]: 2025-11-25 09:32:05.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:06 np0005534516 podman[437769]: 2025-11-25 09:32:06.812362813 +0000 UTC m=+0.060373757 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 04:32:06 np0005534516 podman[437770]: 2025-11-25 09:32:06.829421369 +0000 UTC m=+0.063584715 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 04:32:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 04:32:09 np0005534516 nova_compute[253538]: 2025-11-25 09:32:09.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:10 np0005534516 nova_compute[253538]: 2025-11-25 09:32:10.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:10 np0005534516 nova_compute[253538]: 2025-11-25 09:32:10.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 04:32:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:32:13 np0005534516 nova_compute[253538]: 2025-11-25 09:32:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:13 np0005534516 nova_compute[253538]: 2025-11-25 09:32:13.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:32:13 np0005534516 nova_compute[253538]: 2025-11-25 09:32:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:32:13 np0005534516 nova_compute[253538]: 2025-11-25 09:32:13.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:32:14 np0005534516 nova_compute[253538]: 2025-11-25 09:32:14.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:32:15 np0005534516 nova_compute[253538]: 2025-11-25 09:32:15.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:16 np0005534516 nova_compute[253538]: 2025-11-25 09:32:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:32:17 np0005534516 nova_compute[253538]: 2025-11-25 09:32:17.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:17 np0005534516 nova_compute[253538]: 2025-11-25 09:32:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:17 np0005534516 nova_compute[253538]: 2025-11-25 09:32:17.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:32:17 np0005534516 podman[437808]: 2025-11-25 09:32:17.832418285 +0000 UTC m=+0.080763483 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:32:18 np0005534516 nova_compute[253538]: 2025-11-25 09:32:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:32:19 np0005534516 nova_compute[253538]: 2025-11-25 09:32:19.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:20 np0005534516 nova_compute[253538]: 2025-11-25 09:32:20.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 04:32:21 np0005534516 nova_compute[253538]: 2025-11-25 09:32:21.841 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:24 np0005534516 nova_compute[253538]: 2025-11-25 09:32:24.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:25 np0005534516 nova_compute[253538]: 2025-11-25 09:32:25.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:25 np0005534516 nova_compute[253538]: 2025-11-25 09:32:25.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:32:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:32:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:32:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:32:29 np0005534516 nova_compute[253538]: 2025-11-25 09:32:29.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:30 np0005534516 nova_compute[253538]: 2025-11-25 09:32:30.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:32 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2a958daf-c744-41e0-a227-6459902a6abd does not exist
Nov 25 04:32:32 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 4f98d49e-b021-4316-9d34-0d51ee81b71c does not exist
Nov 25 04:32:32 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 9536ad03-b1c8-48aa-9fe1-719d2c133a5a does not exist
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:32 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:32:32 np0005534516 nova_compute[253538]: 2025-11-25 09:32:32.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:32:32 np0005534516 podman[438108]: 2025-11-25 09:32:32.710586018 +0000 UTC m=+0.044933126 container create 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:32:32 np0005534516 systemd[1]: Started libpod-conmon-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope.
Nov 25 04:32:32 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:32 np0005534516 podman[438108]: 2025-11-25 09:32:32.688535007 +0000 UTC m=+0.022882095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:32 np0005534516 podman[438108]: 2025-11-25 09:32:32.801906268 +0000 UTC m=+0.136253326 container init 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:32:32 np0005534516 podman[438108]: 2025-11-25 09:32:32.809913216 +0000 UTC m=+0.144260274 container start 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:32:32 np0005534516 podman[438108]: 2025-11-25 09:32:32.813429582 +0000 UTC m=+0.147776640 container attach 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:32:32 np0005534516 cool_hermann[438143]: 167 167
Nov 25 04:32:32 np0005534516 systemd[1]: libpod-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope: Deactivated successfully.
Nov 25 04:32:32 np0005534516 podman[438148]: 2025-11-25 09:32:32.852850467 +0000 UTC m=+0.023974265 container died 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 04:32:32 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b2bfc41aaa27bfc2f1e3eede4e4975c250d7e94929d29971e233b602d2907cb5-merged.mount: Deactivated successfully.
Nov 25 04:32:32 np0005534516 podman[438148]: 2025-11-25 09:32:32.889128296 +0000 UTC m=+0.060252104 container remove 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:32:32 np0005534516 systemd[1]: libpod-conmon-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope: Deactivated successfully.
Nov 25 04:32:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:32:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452485779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.058 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:32:33 np0005534516 podman[438170]: 2025-11-25 09:32:33.0653588 +0000 UTC m=+0.050362624 container create 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:32:33 np0005534516 systemd[1]: Started libpod-conmon-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope.
Nov 25 04:32:33 np0005534516 podman[438170]: 2025-11-25 09:32:33.042673162 +0000 UTC m=+0.027677026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:33 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:33 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:33 np0005534516 podman[438170]: 2025-11-25 09:32:33.164259126 +0000 UTC m=+0.149262970 container init 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:33 np0005534516 podman[438170]: 2025-11-25 09:32:33.17322078 +0000 UTC m=+0.158224604 container start 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 04:32:33 np0005534516 podman[438170]: 2025-11-25 09:32:33.177249901 +0000 UTC m=+0.162253745 container attach 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.243 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.306 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.307 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.320 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:32:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:32:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535890691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.828 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.835 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.856 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.860 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:32:33 np0005534516 nova_compute[253538]: 2025-11-25 09:32:33.862 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:32:34 np0005534516 naughty_jackson[438188]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:32:34 np0005534516 naughty_jackson[438188]: --> relative data size: 1.0
Nov 25 04:32:34 np0005534516 naughty_jackson[438188]: --> All data devices are unavailable
Nov 25 04:32:34 np0005534516 systemd[1]: libpod-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Deactivated successfully.
Nov 25 04:32:34 np0005534516 systemd[1]: libpod-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Consumed 1.012s CPU time.
Nov 25 04:32:34 np0005534516 nova_compute[253538]: 2025-11-25 09:32:34.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:34 np0005534516 podman[438239]: 2025-11-25 09:32:34.294362256 +0000 UTC m=+0.031878091 container died 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 04:32:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:34 np0005534516 systemd[1]: var-lib-containers-storage-overlay-a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7-merged.mount: Deactivated successfully.
Nov 25 04:32:34 np0005534516 podman[438239]: 2025-11-25 09:32:34.743232613 +0000 UTC m=+0.480748428 container remove 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:34 np0005534516 systemd[1]: libpod-conmon-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Deactivated successfully.
Nov 25 04:32:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.413834104 +0000 UTC m=+0.023566833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.555890197 +0000 UTC m=+0.165622946 container create 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:32:35 np0005534516 systemd[1]: Started libpod-conmon-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope.
Nov 25 04:32:35 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.885495743 +0000 UTC m=+0.495228472 container init 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.896885343 +0000 UTC m=+0.506618092 container start 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 04:32:35 np0005534516 eloquent_mclean[438412]: 167 167
Nov 25 04:32:35 np0005534516 systemd[1]: libpod-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope: Deactivated successfully.
Nov 25 04:32:35 np0005534516 nova_compute[253538]: 2025-11-25 09:32:35.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.986214108 +0000 UTC m=+0.595946867 container attach 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:32:35 np0005534516 podman[438395]: 2025-11-25 09:32:35.987411141 +0000 UTC m=+0.597143850 container died 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:32:36 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e1ce38f23e761ce5119220c91e753f0366bedca261a35cc12872f8a13f12e18e-merged.mount: Deactivated successfully.
Nov 25 04:32:36 np0005534516 podman[438395]: 2025-11-25 09:32:36.276283226 +0000 UTC m=+0.886015935 container remove 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:36 np0005534516 systemd[1]: libpod-conmon-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope: Deactivated successfully.
Nov 25 04:32:36 np0005534516 podman[438436]: 2025-11-25 09:32:36.456331594 +0000 UTC m=+0.029911595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:36 np0005534516 podman[438436]: 2025-11-25 09:32:36.5750001 +0000 UTC m=+0.148580081 container create 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:36 np0005534516 systemd[1]: Started libpod-conmon-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope.
Nov 25 04:32:36 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:36 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:36 np0005534516 podman[438436]: 2025-11-25 09:32:36.836575461 +0000 UTC m=+0.410155472 container init 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:32:36 np0005534516 podman[438436]: 2025-11-25 09:32:36.848462875 +0000 UTC m=+0.422042856 container start 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:36 np0005534516 podman[438436]: 2025-11-25 09:32:36.862544539 +0000 UTC m=+0.436124530 container attach 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:32:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]: {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    "0": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "devices": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "/dev/loop3"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            ],
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_name": "ceph_lv0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_size": "21470642176",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "name": "ceph_lv0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "tags": {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_name": "ceph",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.crush_device_class": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.encrypted": "0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_id": "0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.vdo": "0"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            },
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "vg_name": "ceph_vg0"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        }
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    ],
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    "1": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "devices": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "/dev/loop4"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            ],
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_name": "ceph_lv1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_size": "21470642176",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "name": "ceph_lv1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "tags": {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_name": "ceph",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.crush_device_class": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.encrypted": "0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_id": "1",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.vdo": "0"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            },
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "vg_name": "ceph_vg1"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        }
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    ],
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    "2": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "devices": [
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "/dev/loop5"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            ],
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_name": "ceph_lv2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_size": "21470642176",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "name": "ceph_lv2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "tags": {
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.cluster_name": "ceph",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.crush_device_class": "",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.encrypted": "0",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osd_id": "2",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:                "ceph.vdo": "0"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            },
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "type": "block",
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:            "vg_name": "ceph_vg2"
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:        }
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]:    ]
Nov 25 04:32:37 np0005534516 reverent_joliot[438453]: }
Nov 25 04:32:37 np0005534516 systemd[1]: libpod-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope: Deactivated successfully.
Nov 25 04:32:37 np0005534516 podman[438436]: 2025-11-25 09:32:37.61050626 +0000 UTC m=+1.184086291 container died 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 04:32:38 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa-merged.mount: Deactivated successfully.
Nov 25 04:32:38 np0005534516 podman[438436]: 2025-11-25 09:32:38.757227902 +0000 UTC m=+2.330807923 container remove 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 04:32:38 np0005534516 systemd[1]: libpod-conmon-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope: Deactivated successfully.
Nov 25 04:32:38 np0005534516 podman[438465]: 2025-11-25 09:32:38.88554038 +0000 UTC m=+1.228918583 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:32:38 np0005534516 podman[438472]: 2025-11-25 09:32:38.88480833 +0000 UTC m=+1.229712366 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 04:32:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:39 np0005534516 nova_compute[253538]: 2025-11-25 09:32:39.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.415212581 +0000 UTC m=+0.040147167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.577459143 +0000 UTC m=+0.202393729 container create ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 04:32:39 np0005534516 systemd[1]: Started libpod-conmon-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope.
Nov 25 04:32:39 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.741848475 +0000 UTC m=+0.366783081 container init ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.75340939 +0000 UTC m=+0.378343976 container start ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:32:39 np0005534516 great_haslett[438673]: 167 167
Nov 25 04:32:39 np0005534516 systemd[1]: libpod-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope: Deactivated successfully.
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.820250872 +0000 UTC m=+0.445185458 container attach ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:39 np0005534516 podman[438657]: 2025-11-25 09:32:39.821688652 +0000 UTC m=+0.446623238 container died ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:32:39 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ca54e4da19159da098181a4439c763bfb153cad7cacb84c940146eb512a8c9e6-merged.mount: Deactivated successfully.
Nov 25 04:32:40 np0005534516 podman[438657]: 2025-11-25 09:32:40.217766419 +0000 UTC m=+0.842701045 container remove ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:32:40 np0005534516 systemd[1]: libpod-conmon-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope: Deactivated successfully.
Nov 25 04:32:40 np0005534516 podman[438700]: 2025-11-25 09:32:40.540343283 +0000 UTC m=+0.122126880 container create 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:40 np0005534516 podman[438700]: 2025-11-25 09:32:40.459271523 +0000 UTC m=+0.041055210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:32:40 np0005534516 systemd[1]: Started libpod-conmon-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope.
Nov 25 04:32:40 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:32:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:40 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:40 np0005534516 podman[438700]: 2025-11-25 09:32:40.667132549 +0000 UTC m=+0.248916166 container init 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:40 np0005534516 podman[438700]: 2025-11-25 09:32:40.680018061 +0000 UTC m=+0.261801698 container start 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:32:40 np0005534516 podman[438700]: 2025-11-25 09:32:40.79297447 +0000 UTC m=+0.374758167 container attach 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:32:40 np0005534516 nova_compute[253538]: 2025-11-25 09:32:40.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:32:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]: {
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_id": 1,
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "type": "bluestore"
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    },
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_id": 2,
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "type": "bluestore"
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    },
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_id": 0,
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:        "type": "bluestore"
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]:    }
Nov 25 04:32:41 np0005534516 wonderful_buck[438716]: }
Nov 25 04:32:41 np0005534516 systemd[1]: libpod-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Deactivated successfully.
Nov 25 04:32:41 np0005534516 systemd[1]: libpod-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Consumed 1.006s CPU time.
Nov 25 04:32:41 np0005534516 podman[438700]: 2025-11-25 09:32:41.67878616 +0000 UTC m=+1.260569797 container died 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:41 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef-merged.mount: Deactivated successfully.
Nov 25 04:32:41 np0005534516 podman[438700]: 2025-11-25 09:32:41.825695585 +0000 UTC m=+1.407479222 container remove 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:32:41 np0005534516 systemd[1]: libpod-conmon-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Deactivated successfully.
Nov 25 04:32:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:32:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:41 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:32:41 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:41 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1e47d68c-4dc6-478b-8556-95b45088a4d8 does not exist
Nov 25 04:32:41 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c2bccb43-b819-448a-b71e-cd9dcdbca330 does not exist
Nov 25 04:32:42 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:42 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:32:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:44 np0005534516 nova_compute[253538]: 2025-11-25 09:32:44.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:45 np0005534516 nova_compute[253538]: 2025-11-25 09:32:45.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:48 np0005534516 podman[438813]: 2025-11-25 09:32:48.842458847 +0000 UTC m=+0.091695421 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:32:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:49 np0005534516 nova_compute[253538]: 2025-11-25 09:32:49.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:50 np0005534516 nova_compute[253538]: 2025-11-25 09:32:50.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:32:53
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta']
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:32:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:32:54 np0005534516 nova_compute[253538]: 2025-11-25 09:32:54.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:32:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:55 np0005534516 nova_compute[253538]: 2025-11-25 09:32:55.926 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:32:59 np0005534516 nova_compute[253538]: 2025-11-25 09:32:59.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:32:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:00 np0005534516 nova_compute[253538]: 2025-11-25 09:33:00.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.167758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183167798, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 251, "total_data_size": 2720273, "memory_usage": 2766368, "flush_reason": "Manual Compaction"}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183229725, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2672166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68769, "largest_seqno": 70441, "table_properties": {"data_size": 2664378, "index_size": 4730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15798, "raw_average_key_size": 19, "raw_value_size": 2648860, "raw_average_value_size": 3352, "num_data_blocks": 211, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062999, "oldest_key_time": 1764062999, "file_creation_time": 1764063183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 62053 microseconds, and 7396 cpu microseconds.
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.229801) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2672166 bytes OK
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.229835) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237794) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237832) EVENT_LOG_v1 {"time_micros": 1764063183237820, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2713063, prev total WAL file size 2713063, number of live WAL files 2.
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.238894) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2609KB)], [164(8704KB)]
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183238925, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11585256, "oldest_snapshot_seqno": -1}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8789 keys, 9813066 bytes, temperature: kUnknown
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183301965, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9813066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9758845, "index_size": 31132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 230682, "raw_average_key_size": 26, "raw_value_size": 9606403, "raw_average_value_size": 1093, "num_data_blocks": 1200, "num_entries": 8789, "num_filter_entries": 8789, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.302375) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9813066 bytes
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.306495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9303, records dropped: 514 output_compression: NoCompression
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.306530) EVENT_LOG_v1 {"time_micros": 1764063183306514, "job": 102, "event": "compaction_finished", "compaction_time_micros": 63163, "compaction_time_cpu_micros": 23202, "output_level": 6, "num_output_files": 1, "total_output_size": 9813066, "num_input_records": 9303, "num_output_records": 8789, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183307801, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183311375, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.238816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:03 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:33:04 np0005534516 nova_compute[253538]: 2025-11-25 09:33:04.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:33:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:05 np0005534516 nova_compute[253538]: 2025-11-25 09:33:05.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:07 np0005534516 nova_compute[253538]: 2025-11-25 09:33:07.864 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:08 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:09 np0005534516 nova_compute[253538]: 2025-11-25 09:33:09.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:09 np0005534516 podman[438839]: 2025-11-25 09:33:09.825786209 +0000 UTC m=+0.072300712 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 04:33:09 np0005534516 podman[438838]: 2025-11-25 09:33:09.844214762 +0000 UTC m=+0.089080919 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:33:10 np0005534516 nova_compute[253538]: 2025-11-25 09:33:10.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:10 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:11 np0005534516 nova_compute[253538]: 2025-11-25 09:33:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:12 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:14 np0005534516 nova_compute[253538]: 2025-11-25 09:33:14.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:14 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:15 np0005534516 nova_compute[253538]: 2025-11-25 09:33:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:15 np0005534516 nova_compute[253538]: 2025-11-25 09:33:15.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:33:15 np0005534516 nova_compute[253538]: 2025-11-25 09:33:15.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:33:15 np0005534516 nova_compute[253538]: 2025-11-25 09:33:15.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:33:15 np0005534516 nova_compute[253538]: 2025-11-25 09:33:15.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:16 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:17 np0005534516 nova_compute[253538]: 2025-11-25 09:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:17 np0005534516 nova_compute[253538]: 2025-11-25 09:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:17 np0005534516 nova_compute[253538]: 2025-11-25 09:33:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:17 np0005534516 nova_compute[253538]: 2025-11-25 09:33:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:33:18 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:19 np0005534516 nova_compute[253538]: 2025-11-25 09:33:19.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:19 np0005534516 nova_compute[253538]: 2025-11-25 09:33:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:19 np0005534516 podman[438879]: 2025-11-25 09:33:19.843818618 +0000 UTC m=+0.090292552 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:33:20 np0005534516 nova_compute[253538]: 2025-11-25 09:33:20.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:20 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:22 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:24 np0005534516 nova_compute[253538]: 2025-11-25 09:33:24.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:24 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:25 np0005534516 nova_compute[253538]: 2025-11-25 09:33:25.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:25 np0005534516 nova_compute[253538]: 2025-11-25 09:33:25.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:26 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:28 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:33:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:33:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:33:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:33:29 np0005534516 nova_compute[253538]: 2025-11-25 09:33:29.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:30 np0005534516 nova_compute[253538]: 2025-11-25 09:33:30.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:30 np0005534516 nova_compute[253538]: 2025-11-25 09:33:30.978 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:30 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.599 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:33:32 np0005534516 nova_compute[253538]: 2025-11-25 09:33:32.601 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:33:32 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:33:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904402365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.023 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.216 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.217 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3638MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.218 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.218 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.365 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.365 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.463 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.553 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.554 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.572 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.596 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:33:33 np0005534516 nova_compute[253538]: 2025-11-25 09:33:33.613 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:33:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:33:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2998360382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.051 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.058 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.083 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.085 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.085 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:33:34 np0005534516 nova_compute[253538]: 2025-11-25 09:33:34.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:34 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:35 np0005534516 nova_compute[253538]: 2025-11-25 09:33:35.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:36 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:38 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:39 np0005534516 nova_compute[253538]: 2025-11-25 09:33:39.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:40 np0005534516 podman[438953]: 2025-11-25 09:33:40.828228229 +0000 UTC m=+0.077038831 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:33:40 np0005534516 podman[438954]: 2025-11-25 09:33:40.841360497 +0000 UTC m=+0.076691962 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 04:33:40 np0005534516 nova_compute[253538]: 2025-11-25 09:33:40.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:40 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:33:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:33:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:33:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:42 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:33:42 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:42 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 88b38ac1-664b-4884-8c6c-63f2444f1ef4 does not exist
Nov 25 04:33:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b481079-20e2-4ce6-9fa8-615d9da57a2b does not exist
Nov 25 04:33:43 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3f0366ca-105e-4b40-829b-8d9fc71aa33a does not exist
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:43 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:43.941695748 +0000 UTC m=+0.021363803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.058241516 +0000 UTC m=+0.137909561 container create 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:44 np0005534516 systemd[1]: Started libpod-conmon-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope.
Nov 25 04:33:44 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:44 np0005534516 nova_compute[253538]: 2025-11-25 09:33:44.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.30637483 +0000 UTC m=+0.386042895 container init 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.314894373 +0000 UTC m=+0.394562398 container start 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:33:44 np0005534516 jolly_kirch[439398]: 167 167
Nov 25 04:33:44 np0005534516 systemd[1]: libpod-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope: Deactivated successfully.
Nov 25 04:33:44 np0005534516 conmon[439398]: conmon 61a4956fcb2086360076 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope/container/memory.events
Nov 25 04:33:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.393709292 +0000 UTC m=+0.473377317 container attach 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.394670248 +0000 UTC m=+0.474338253 container died 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:33:44 np0005534516 systemd[1]: var-lib-containers-storage-overlay-315db2ef99eda647d0ee0c246513139f1f8e30731defc5b19674169931ae69d7-merged.mount: Deactivated successfully.
Nov 25 04:33:44 np0005534516 podman[439382]: 2025-11-25 09:33:44.899667444 +0000 UTC m=+0.979335449 container remove 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:33:44 np0005534516 systemd[1]: libpod-conmon-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope: Deactivated successfully.
Nov 25 04:33:44 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:33:45 np0005534516 podman[439422]: 2025-11-25 09:33:45.087856665 +0000 UTC m=+0.039718034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:45 np0005534516 podman[439422]: 2025-11-25 09:33:45.292028251 +0000 UTC m=+0.243889630 container create 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:45 np0005534516 systemd[1]: Started libpod-conmon-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope.
Nov 25 04:33:45 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:45 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:45 np0005534516 podman[439422]: 2025-11-25 09:33:45.555124484 +0000 UTC m=+0.506985833 container init 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:33:45 np0005534516 podman[439422]: 2025-11-25 09:33:45.562828834 +0000 UTC m=+0.514690183 container start 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:33:45 np0005534516 podman[439422]: 2025-11-25 09:33:45.638014044 +0000 UTC m=+0.589875423 container attach 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:33:45 np0005534516 nova_compute[253538]: 2025-11-25 09:33:45.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:33:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 25 04:33:46 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 25 04:33:46 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 25 04:33:46 np0005534516 quirky_goldstine[439438]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:33:46 np0005534516 quirky_goldstine[439438]: --> relative data size: 1.0
Nov 25 04:33:46 np0005534516 quirky_goldstine[439438]: --> All data devices are unavailable
Nov 25 04:33:46 np0005534516 systemd[1]: libpod-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope: Deactivated successfully.
Nov 25 04:33:46 np0005534516 podman[439422]: 2025-11-25 09:33:46.590392538 +0000 UTC m=+1.542253887 container died 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:33:46 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d-merged.mount: Deactivated successfully.
Nov 25 04:33:46 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 25 04:33:47 np0005534516 podman[439422]: 2025-11-25 09:33:47.089536625 +0000 UTC m=+2.041398014 container remove 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:47 np0005534516 systemd[1]: libpod-conmon-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope: Deactivated successfully.
Nov 25 04:33:47 np0005534516 podman[439619]: 2025-11-25 09:33:47.87507348 +0000 UTC m=+0.116550727 container create 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 04:33:47 np0005534516 podman[439619]: 2025-11-25 09:33:47.795885622 +0000 UTC m=+0.037362959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:47 np0005534516 systemd[1]: Started libpod-conmon-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope.
Nov 25 04:33:47 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:47 np0005534516 podman[439619]: 2025-11-25 09:33:47.991889835 +0000 UTC m=+0.233367122 container init 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 04:33:48 np0005534516 podman[439619]: 2025-11-25 09:33:48.003475941 +0000 UTC m=+0.244953188 container start 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:48 np0005534516 podman[439619]: 2025-11-25 09:33:48.006435492 +0000 UTC m=+0.247912859 container attach 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:33:48 np0005534516 distracted_diffie[439635]: 167 167
Nov 25 04:33:48 np0005534516 systemd[1]: libpod-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope: Deactivated successfully.
Nov 25 04:33:48 np0005534516 conmon[439635]: conmon 39941af595881874439f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope/container/memory.events
Nov 25 04:33:48 np0005534516 podman[439619]: 2025-11-25 09:33:48.011192642 +0000 UTC m=+0.252669899 container died 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 04:33:48 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f6cb92f1671015a272803035473012e6d068952726001ee0591fe85283d13c1e-merged.mount: Deactivated successfully.
Nov 25 04:33:48 np0005534516 podman[439619]: 2025-11-25 09:33:48.114464877 +0000 UTC m=+0.355942134 container remove 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 04:33:48 np0005534516 systemd[1]: libpod-conmon-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope: Deactivated successfully.
Nov 25 04:33:48 np0005534516 podman[439659]: 2025-11-25 09:33:48.303421298 +0000 UTC m=+0.066913095 container create 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:33:48 np0005534516 systemd[1]: Started libpod-conmon-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope.
Nov 25 04:33:48 np0005534516 podman[439659]: 2025-11-25 09:33:48.258850193 +0000 UTC m=+0.022342000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:48 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:48 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:48 np0005534516 podman[439659]: 2025-11-25 09:33:48.463511622 +0000 UTC m=+0.227003429 container init 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 04:33:48 np0005534516 podman[439659]: 2025-11-25 09:33:48.469885757 +0000 UTC m=+0.233377584 container start 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:48 np0005534516 podman[439659]: 2025-11-25 09:33:48.587784931 +0000 UTC m=+0.351276758 container attach 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 04:33:48 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 33 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]: {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    "0": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "devices": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "/dev/loop3"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            ],
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_name": "ceph_lv0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_size": "21470642176",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "name": "ceph_lv0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "tags": {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_name": "ceph",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.crush_device_class": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.encrypted": "0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_id": "0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.vdo": "0"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            },
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "vg_name": "ceph_vg0"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        }
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    ],
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    "1": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "devices": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "/dev/loop4"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            ],
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_name": "ceph_lv1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_size": "21470642176",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "name": "ceph_lv1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "tags": {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_name": "ceph",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.crush_device_class": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.encrypted": "0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_id": "1",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.vdo": "0"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            },
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "vg_name": "ceph_vg1"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        }
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    ],
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    "2": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "devices": [
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "/dev/loop5"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            ],
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_name": "ceph_lv2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_size": "21470642176",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "name": "ceph_lv2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "tags": {
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.cluster_name": "ceph",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.crush_device_class": "",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.encrypted": "0",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osd_id": "2",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:                "ceph.vdo": "0"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            },
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "type": "block",
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:            "vg_name": "ceph_vg2"
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:        }
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]:    ]
Nov 25 04:33:49 np0005534516 amazing_khorana[439675]: }
Nov 25 04:33:49 np0005534516 systemd[1]: libpod-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope: Deactivated successfully.
Nov 25 04:33:49 np0005534516 podman[439659]: 2025-11-25 09:33:49.282247033 +0000 UTC m=+1.045738850 container died 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:49 np0005534516 nova_compute[253538]: 2025-11-25 09:33:49.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:49 np0005534516 systemd[1]: var-lib-containers-storage-overlay-92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b-merged.mount: Deactivated successfully.
Nov 25 04:33:49 np0005534516 podman[439659]: 2025-11-25 09:33:49.647985584 +0000 UTC m=+1.411477371 container remove 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:49 np0005534516 systemd[1]: libpod-conmon-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope: Deactivated successfully.
Nov 25 04:33:50 np0005534516 podman[439770]: 2025-11-25 09:33:50.028358504 +0000 UTC m=+0.099308708 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:33:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 25 04:33:50 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 25 04:33:50 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 25 04:33:50 np0005534516 podman[439863]: 2025-11-25 09:33:50.408255141 +0000 UTC m=+0.102438334 container create e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:33:50 np0005534516 podman[439863]: 2025-11-25 09:33:50.33746228 +0000 UTC m=+0.031645463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:50 np0005534516 systemd[1]: Started libpod-conmon-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope.
Nov 25 04:33:50 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:50 np0005534516 nova_compute[253538]: 2025-11-25 09:33:50.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:50 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 22 op/s
Nov 25 04:33:51 np0005534516 podman[439863]: 2025-11-25 09:33:51.039853749 +0000 UTC m=+0.734036932 container init e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:33:51 np0005534516 podman[439863]: 2025-11-25 09:33:51.053208733 +0000 UTC m=+0.747391906 container start e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:33:51 np0005534516 crazy_ptolemy[439882]: 167 167
Nov 25 04:33:51 np0005534516 systemd[1]: libpod-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope: Deactivated successfully.
Nov 25 04:33:51 np0005534516 podman[439863]: 2025-11-25 09:33:51.149081857 +0000 UTC m=+0.843265050 container attach e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:51 np0005534516 podman[439863]: 2025-11-25 09:33:51.149487438 +0000 UTC m=+0.843670611 container died e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:51 np0005534516 systemd[1]: var-lib-containers-storage-overlay-14dea20a4e070f30f646ddc0fcfef4c5268e618308fad135909c8240333d99fa-merged.mount: Deactivated successfully.
Nov 25 04:33:51 np0005534516 podman[439863]: 2025-11-25 09:33:51.22033935 +0000 UTC m=+0.914522523 container remove e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:51 np0005534516 systemd[1]: libpod-conmon-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope: Deactivated successfully.
Nov 25 04:33:51 np0005534516 podman[439905]: 2025-11-25 09:33:51.49475968 +0000 UTC m=+0.097119328 container create ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:51 np0005534516 podman[439905]: 2025-11-25 09:33:51.422548033 +0000 UTC m=+0.024907711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:33:51 np0005534516 systemd[1]: Started libpod-conmon-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope.
Nov 25 04:33:51 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:33:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:51 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:51 np0005534516 podman[439905]: 2025-11-25 09:33:51.854533359 +0000 UTC m=+0.456893087 container init ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:33:51 np0005534516 podman[439905]: 2025-11-25 09:33:51.866458644 +0000 UTC m=+0.468818292 container start ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:51 np0005534516 podman[439905]: 2025-11-25 09:33:51.957398974 +0000 UTC m=+0.559758702 container attach ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]: {
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_id": 1,
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "type": "bluestore"
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    },
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_id": 2,
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "type": "bluestore"
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    },
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_id": 0,
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:        "type": "bluestore"
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]:    }
Nov 25 04:33:52 np0005534516 sharp_wilson[439921]: }
Nov 25 04:33:52 np0005534516 systemd[1]: libpod-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Deactivated successfully.
Nov 25 04:33:52 np0005534516 podman[439905]: 2025-11-25 09:33:52.934957244 +0000 UTC m=+1.537316882 container died ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:33:52 np0005534516 systemd[1]: libpod-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Consumed 1.074s CPU time.
Nov 25 04:33:52 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 04:33:53 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f-merged.mount: Deactivated successfully.
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:33:53
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.log', '.mgr', '.rgw.root', 'images']
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:33:53 np0005534516 podman[439905]: 2025-11-25 09:33:53.691232561 +0000 UTC m=+2.293592229 container remove ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:53 np0005534516 systemd[1]: libpod-conmon-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Deactivated successfully.
Nov 25 04:33:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:33:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:53 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:33:53 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f8870446-f61b-463a-9ad8-c4f178a450a2 does not exist
Nov 25 04:33:53 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d3968b03-fa92-4517-bb4f-3c35c5070e5c does not exist
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:33:54 np0005534516 nova_compute[253538]: 2025-11-25 09:33:54.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:54 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:33:54 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 4.9 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Nov 25 04:33:55 np0005534516 nova_compute[253538]: 2025-11-25 09:33:55.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:56 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Nov 25 04:33:58 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 25 04:33:59 np0005534516 nova_compute[253538]: 2025-11-25 09:33:59.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 25 04:33:59 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 25 04:34:00 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 964 B/s wr, 19 op/s
Nov 25 04:34:01 np0005534516 nova_compute[253538]: 2025-11-25 09:34:01.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:02 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.6 MiB/s wr, 15 op/s
Nov 25 04:34:04 np0005534516 nova_compute[253538]: 2025-11-25 09:34:04.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.533581) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244533654, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 260, "total_data_size": 1031675, "memory_usage": 1048064, "flush_reason": "Manual Compaction"}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244544795, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1010330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70442, "largest_seqno": 71260, "table_properties": {"data_size": 1006135, "index_size": 1910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9358, "raw_average_key_size": 19, "raw_value_size": 997583, "raw_average_value_size": 2073, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063185, "oldest_key_time": 1764063185, "file_creation_time": 1764063244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 11270 microseconds, and 4859 cpu microseconds.
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.544850) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1010330 bytes OK
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.544886) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546439) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546461) EVENT_LOG_v1 {"time_micros": 1764063244546454, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1027542, prev total WAL file size 1027542, number of live WAL files 2.
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.547239) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303139' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(986KB)], [167(9583KB)]
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244547280, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10823396, "oldest_snapshot_seqno": -1}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8737 keys, 10710109 bytes, temperature: kUnknown
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244623671, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10710109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10654571, "index_size": 32577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 230530, "raw_average_key_size": 26, "raw_value_size": 10501308, "raw_average_value_size": 1201, "num_data_blocks": 1261, "num_entries": 8737, "num_filter_entries": 8737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.623959) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10710109 bytes
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.625439) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 140.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(21.3) write-amplify(10.6) OK, records in: 9270, records dropped: 533 output_compression: NoCompression
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.625461) EVENT_LOG_v1 {"time_micros": 1764063244625451, "job": 104, "event": "compaction_finished", "compaction_time_micros": 76491, "compaction_time_cpu_micros": 44611, "output_level": 6, "num_output_files": 1, "total_output_size": 10710109, "num_input_records": 9270, "num_output_records": 8737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244625768, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244627961, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.547155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:34:04 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 04:34:06 np0005534516 nova_compute[253538]: 2025-11-25 09:34:06.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:06 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 04:34:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 11 op/s
Nov 25 04:34:09 np0005534516 nova_compute[253538]: 2025-11-25 09:34:09.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:10 np0005534516 nova_compute[253538]: 2025-11-25 09:34:10.085 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Nov 25 04:34:11 np0005534516 nova_compute[253538]: 2025-11-25 09:34:11.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:11 np0005534516 nova_compute[253538]: 2025-11-25 09:34:11.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:11 np0005534516 podman[440024]: 2025-11-25 09:34:11.659389892 +0000 UTC m=+0.059868023 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 04:34:11 np0005534516 podman[440023]: 2025-11-25 09:34:11.664390328 +0000 UTC m=+0.065006913 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:34:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Nov 25 04:34:14 np0005534516 nova_compute[253538]: 2025-11-25 09:34:14.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:16 np0005534516 nova_compute[253538]: 2025-11-25 09:34:16.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.576 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.577 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:17 np0005534516 nova_compute[253538]: 2025-11-25 09:34:17.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:34:18 np0005534516 nova_compute[253538]: 2025-11-25 09:34:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:18 np0005534516 nova_compute[253538]: 2025-11-25 09:34:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:19 np0005534516 nova_compute[253538]: 2025-11-25 09:34:19.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:20 np0005534516 nova_compute[253538]: 2025-11-25 09:34:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:20 np0005534516 podman[440059]: 2025-11-25 09:34:20.819105688 +0000 UTC m=+0.073303420 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:34:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:21 np0005534516 nova_compute[253538]: 2025-11-25 09:34:21.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:24 np0005534516 nova_compute[253538]: 2025-11-25 09:34:24.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:26 np0005534516 nova_compute[253538]: 2025-11-25 09:34:26.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:27 np0005534516 nova_compute[253538]: 2025-11-25 09:34:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:34:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:34:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:34:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:34:29 np0005534516 nova_compute[253538]: 2025-11-25 09:34:29.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:31 np0005534516 nova_compute[253538]: 2025-11-25 09:34:31.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:34:33 np0005534516 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:34:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:34:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684591325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.069 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.235 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.236 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.237 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.237 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.328 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.329 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.348 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:34:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735311605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.845 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.850 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.862 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.864 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:34:34 np0005534516 nova_compute[253538]: 2025-11-25 09:34:34.864 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:34:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:36 np0005534516 nova_compute[253538]: 2025-11-25 09:34:36.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:39 np0005534516 nova_compute[253538]: 2025-11-25 09:34:39.387 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:41 np0005534516 nova_compute[253538]: 2025-11-25 09:34:41.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:34:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:34:41 np0005534516 podman[440129]: 2025-11-25 09:34:41.804123292 +0000 UTC m=+0.053857899 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 04:34:41 np0005534516 podman[440130]: 2025-11-25 09:34:41.811587036 +0000 UTC m=+0.063605526 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 04:34:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:44 np0005534516 nova_compute[253538]: 2025-11-25 09:34:44.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:46 np0005534516 nova_compute[253538]: 2025-11-25 09:34:46.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:49 np0005534516 nova_compute[253538]: 2025-11-25 09:34:49.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:51 np0005534516 nova_compute[253538]: 2025-11-25 09:34:51.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:34:51 np0005534516 podman[440170]: 2025-11-25 09:34:51.59641442 +0000 UTC m=+0.099448392 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:34:53
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.control']
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:34:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:34:54 np0005534516 nova_compute[253538]: 2025-11-25 09:34:54.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 747128f5-fa36-4ce0-8242-9c4c2e1189de does not exist
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e868dbc3-1eaa-44bc-b913-1d9adfbc5af3 does not exist
Nov 25 04:34:54 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 092675a0-50c9-443a-bf25-cc89012bd15e does not exist
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:34:54 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:34:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:34:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:34:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:34:55 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.530217384 +0000 UTC m=+0.079419025 container create f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.476662405 +0000 UTC m=+0.025864066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:34:55 np0005534516 systemd[1]: Started libpod-conmon-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope.
Nov 25 04:34:55 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.794864439 +0000 UTC m=+0.344066100 container init f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.801679705 +0000 UTC m=+0.350881346 container start f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:34:55 np0005534516 recursing_panini[440485]: 167 167
Nov 25 04:34:55 np0005534516 systemd[1]: libpod-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope: Deactivated successfully.
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.820635262 +0000 UTC m=+0.369836903 container attach f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:34:55 np0005534516 podman[440468]: 2025-11-25 09:34:55.821700601 +0000 UTC m=+0.370902242 container died f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:34:55 np0005534516 systemd[1]: var-lib-containers-storage-overlay-27dc0136c57dcd0bad1de871ecdbf79c1710601fb30600e9ac7afbdf74078a8d-merged.mount: Deactivated successfully.
Nov 25 04:34:56 np0005534516 nova_compute[253538]: 2025-11-25 09:34:56.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:34:56 np0005534516 podman[440468]: 2025-11-25 09:34:56.142373303 +0000 UTC m=+0.691574964 container remove f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 04:34:56 np0005534516 systemd[1]: libpod-conmon-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope: Deactivated successfully.
Nov 25 04:34:56 np0005534516 podman[440508]: 2025-11-25 09:34:56.486942837 +0000 UTC m=+0.113101125 container create 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:34:56 np0005534516 podman[440508]: 2025-11-25 09:34:56.412492727 +0000 UTC m=+0.038651015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:34:56 np0005534516 systemd[1]: Started libpod-conmon-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope.
Nov 25 04:34:56 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:34:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:56 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:56 np0005534516 podman[440508]: 2025-11-25 09:34:56.7200031 +0000 UTC m=+0.346161378 container init 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 04:34:56 np0005534516 podman[440508]: 2025-11-25 09:34:56.734249799 +0000 UTC m=+0.360408057 container start 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 04:34:56 np0005534516 podman[440508]: 2025-11-25 09:34:56.786459562 +0000 UTC m=+0.412617840 container attach 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 04:34:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:57 np0005534516 angry_bouman[440524]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:34:57 np0005534516 angry_bouman[440524]: --> relative data size: 1.0
Nov 25 04:34:57 np0005534516 angry_bouman[440524]: --> All data devices are unavailable
Nov 25 04:34:57 np0005534516 systemd[1]: libpod-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Deactivated successfully.
Nov 25 04:34:57 np0005534516 podman[440508]: 2025-11-25 09:34:57.825777266 +0000 UTC m=+1.451935524 container died 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:34:57 np0005534516 systemd[1]: libpod-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Consumed 1.047s CPU time.
Nov 25 04:34:58 np0005534516 systemd[1]: var-lib-containers-storage-overlay-12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa-merged.mount: Deactivated successfully.
Nov 25 04:34:58 np0005534516 podman[440508]: 2025-11-25 09:34:58.49347906 +0000 UTC m=+2.119637368 container remove 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:34:58 np0005534516 systemd[1]: libpod-conmon-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Deactivated successfully.
Nov 25 04:34:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.149893255 +0000 UTC m=+0.022715080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.328470674 +0000 UTC m=+0.201292469 container create 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:34:59 np0005534516 nova_compute[253538]: 2025-11-25 09:34:59.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:34:59 np0005534516 systemd[1]: Started libpod-conmon-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope.
Nov 25 04:34:59 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:34:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.664790172 +0000 UTC m=+0.537612047 container init 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.671406102 +0000 UTC m=+0.544227937 container start 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 04:34:59 np0005534516 silly_germain[440723]: 167 167
Nov 25 04:34:59 np0005534516 systemd[1]: libpod-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope: Deactivated successfully.
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.728448118 +0000 UTC m=+0.601270013 container attach 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:34:59 np0005534516 podman[440707]: 2025-11-25 09:34:59.72927451 +0000 UTC m=+0.602096345 container died 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:34:59 np0005534516 systemd[1]: var-lib-containers-storage-overlay-5c7386f3b012f623999a6b15a1af6934d50f27272669df9e31d9c38c9bf0f27a-merged.mount: Deactivated successfully.
Nov 25 04:35:00 np0005534516 podman[440707]: 2025-11-25 09:35:00.328890857 +0000 UTC m=+1.201712662 container remove 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 04:35:00 np0005534516 systemd[1]: libpod-conmon-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope: Deactivated successfully.
Nov 25 04:35:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 25 04:35:00 np0005534516 podman[440747]: 2025-11-25 09:35:00.537143724 +0000 UTC m=+0.028777225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:35:00 np0005534516 podman[440747]: 2025-11-25 09:35:00.715766724 +0000 UTC m=+0.207400175 container create a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 04:35:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 25 04:35:00 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 25 04:35:00 np0005534516 systemd[1]: Started libpod-conmon-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope.
Nov 25 04:35:00 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:35:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:00 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:00 np0005534516 podman[440747]: 2025-11-25 09:35:00.995899732 +0000 UTC m=+0.487533213 container init a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:01 np0005534516 podman[440747]: 2025-11-25 09:35:01.007420786 +0000 UTC m=+0.499054247 container start a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:35:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Nov 25 04:35:01 np0005534516 nova_compute[253538]: 2025-11-25 09:35:01.083 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:35:01 np0005534516 podman[440747]: 2025-11-25 09:35:01.1098903 +0000 UTC m=+0.601523791 container attach a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:01 np0005534516 zen_kalam[440763]: {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    "0": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "devices": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "/dev/loop3"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            ],
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_name": "ceph_lv0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_size": "21470642176",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "name": "ceph_lv0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "tags": {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_name": "ceph",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.crush_device_class": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.encrypted": "0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_id": "0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.vdo": "0"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            },
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "vg_name": "ceph_vg0"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        }
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    ],
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    "1": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "devices": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "/dev/loop4"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            ],
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_name": "ceph_lv1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_size": "21470642176",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "name": "ceph_lv1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "tags": {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_name": "ceph",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.crush_device_class": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.encrypted": "0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_id": "1",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.vdo": "0"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            },
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "vg_name": "ceph_vg1"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        }
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    ],
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    "2": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "devices": [
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "/dev/loop5"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            ],
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_name": "ceph_lv2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_size": "21470642176",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "name": "ceph_lv2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "tags": {
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.cluster_name": "ceph",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.crush_device_class": "",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.encrypted": "0",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osd_id": "2",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:                "ceph.vdo": "0"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            },
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "type": "block",
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:            "vg_name": "ceph_vg2"
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:        }
Nov 25 04:35:01 np0005534516 zen_kalam[440763]:    ]
Nov 25 04:35:01 np0005534516 zen_kalam[440763]: }
Nov 25 04:35:01 np0005534516 systemd[1]: libpod-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope: Deactivated successfully.
Nov 25 04:35:01 np0005534516 podman[440747]: 2025-11-25 09:35:01.82116218 +0000 UTC m=+1.312795641 container died a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:35:02 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5-merged.mount: Deactivated successfully.
Nov 25 04:35:02 np0005534516 podman[440747]: 2025-11-25 09:35:02.652823914 +0000 UTC m=+2.144457375 container remove a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 04:35:02 np0005534516 systemd[1]: libpod-conmon-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope: Deactivated successfully.
Nov 25 04:35:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.373884772 +0000 UTC m=+0.085814871 container create ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.310567055 +0000 UTC m=+0.022497174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:35:03 np0005534516 systemd[1]: Started libpod-conmon-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope.
Nov 25 04:35:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.702015437 +0000 UTC m=+0.413945556 container init ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.713268714 +0000 UTC m=+0.425198823 container start ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 04:35:03 np0005534516 cranky_elbakyan[440940]: 167 167
Nov 25 04:35:03 np0005534516 systemd[1]: libpod-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope: Deactivated successfully.
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.777524366 +0000 UTC m=+0.489454475 container attach ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:35:03 np0005534516 podman[440924]: 2025-11-25 09:35:03.778387829 +0000 UTC m=+0.490317978 container died ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:35:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c1a43126f221a2f7499cbc765b41149f8152d65e855c1adcb2916969d3122be8-merged.mount: Deactivated successfully.
Nov 25 04:35:04 np0005534516 podman[440924]: 2025-11-25 09:35:04.195536031 +0000 UTC m=+0.907466120 container remove ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:35:04 np0005534516 systemd[1]: libpod-conmon-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope: Deactivated successfully.
Nov 25 04:35:04 np0005534516 nova_compute[253538]: 2025-11-25 09:35:04.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:35:04 np0005534516 podman[440965]: 2025-11-25 09:35:04.410198164 +0000 UTC m=+0.055021582 container create f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:35:04 np0005534516 podman[440965]: 2025-11-25 09:35:04.377105651 +0000 UTC m=+0.021929089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:35:04 np0005534516 systemd[1]: Started libpod-conmon-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope.
Nov 25 04:35:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:04 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:35:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:04 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:04 np0005534516 podman[440965]: 2025-11-25 09:35:04.578509112 +0000 UTC m=+0.223332530 container init f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 04:35:04 np0005534516 podman[440965]: 2025-11-25 09:35:04.588413731 +0000 UTC m=+0.233237149 container start f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:04 np0005534516 podman[440965]: 2025-11-25 09:35:04.60961135 +0000 UTC m=+0.254434778 container attach f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003330245368561568 of space, bias 1.0, pg target 0.09990736105684704 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:35:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:35:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 14 op/s
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]: {
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_id": 1,
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "type": "bluestore"
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    },
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_id": 2,
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "type": "bluestore"
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    },
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_id": 0,
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:        "type": "bluestore"
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]:    }
Nov 25 04:35:05 np0005534516 blissful_murdock[440982]: }
Nov 25 04:35:05 np0005534516 systemd[1]: libpod-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Deactivated successfully.
Nov 25 04:35:05 np0005534516 systemd[1]: libpod-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Consumed 1.056s CPU time.
Nov 25 04:35:05 np0005534516 podman[440965]: 2025-11-25 09:35:05.641022698 +0000 UTC m=+1.285846156 container died f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:35:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7-merged.mount: Deactivated successfully.
Nov 25 04:35:06 np0005534516 podman[440965]: 2025-11-25 09:35:06.004519708 +0000 UTC m=+1.649343166 container remove f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:35:06 np0005534516 systemd[1]: libpod-conmon-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Deactivated successfully.
Nov 25 04:35:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:35:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:35:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:35:06 np0005534516 nova_compute[253538]: 2025-11-25 09:35:06.084 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:35:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d6d41010-25fb-4b3b-83f5-7680b2d81c16 does not exist
Nov 25 04:35:06 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 239ae6b9-ed4d-4c6b-84ef-be68f6af8a1a does not exist
Nov 25 04:35:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 04:35:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:35:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:35:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 04:35:09 np0005534516 nova_compute[253538]: 2025-11-25 09:35:09.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 25 04:35:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 25 04:35:09 np0005534516 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 25 04:35:09 np0005534516 nova_compute[253538]: 2025-11-25 09:35:09.864 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 04:35:11 np0005534516 nova_compute[253538]: 2025-11-25 09:35:11.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:12 np0005534516 nova_compute[253538]: 2025-11-25 09:35:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:12 np0005534516 podman[441078]: 2025-11-25 09:35:12.832070082 +0000 UTC m=+0.079558690 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:35:12 np0005534516 podman[441077]: 2025-11-25 09:35:12.83237976 +0000 UTC m=+0.078250294 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:35:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 04:35:14 np0005534516 nova_compute[253538]: 2025-11-25 09:35:14.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 409 B/s wr, 10 op/s
Nov 25 04:35:16 np0005534516 nova_compute[253538]: 2025-11-25 09:35:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:17 np0005534516 nova_compute[253538]: 2025-11-25 09:35:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:17 np0005534516 nova_compute[253538]: 2025-11-25 09:35:17.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:35:17 np0005534516 nova_compute[253538]: 2025-11-25 09:35:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:35:17 np0005534516 nova_compute[253538]: 2025-11-25 09:35:17.589 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:35:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:19 np0005534516 nova_compute[253538]: 2025-11-25 09:35:19.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:19 np0005534516 nova_compute[253538]: 2025-11-25 09:35:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:19 np0005534516 nova_compute[253538]: 2025-11-25 09:35:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:19 np0005534516 nova_compute[253538]: 2025-11-25 09:35:19.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:35:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:20 np0005534516 nova_compute[253538]: 2025-11-25 09:35:20.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:21 np0005534516 nova_compute[253538]: 2025-11-25 09:35:21.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:21 np0005534516 podman[441117]: 2025-11-25 09:35:21.93661974 +0000 UTC m=+0.175244368 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 04:35:22 np0005534516 nova_compute[253538]: 2025-11-25 09:35:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:24 np0005534516 nova_compute[253538]: 2025-11-25 09:35:24.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:26 np0005534516 nova_compute[253538]: 2025-11-25 09:35:26.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:28 np0005534516 nova_compute[253538]: 2025-11-25 09:35:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:35:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:35:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:35:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:35:29 np0005534516 nova_compute[253538]: 2025-11-25 09:35:29.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:31 np0005534516 nova_compute[253538]: 2025-11-25 09:35:31.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:33 np0005534516 nova_compute[253538]: 2025-11-25 09:35:33.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:35:34 np0005534516 nova_compute[253538]: 2025-11-25 09:35:34.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:35:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:35:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227355941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.013 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:35:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.165 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.240 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.241 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.275 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:35:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:35:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462335001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.716 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.722 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.737 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.739 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:35:35 np0005534516 nova_compute[253538]: 2025-11-25 09:35:35.739 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:35:36 np0005534516 nova_compute[253538]: 2025-11-25 09:35:36.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:39 np0005534516 nova_compute[253538]: 2025-11-25 09:35:39.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:41 np0005534516 nova_compute[253538]: 2025-11-25 09:35:41.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.123 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:35:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.123 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:35:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:43 np0005534516 podman[441192]: 2025-11-25 09:35:43.808145237 +0000 UTC m=+0.052615056 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 04:35:43 np0005534516 podman[441191]: 2025-11-25 09:35:43.809298118 +0000 UTC m=+0.059737009 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 04:35:44 np0005534516 nova_compute[253538]: 2025-11-25 09:35:44.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:46 np0005534516 nova_compute[253538]: 2025-11-25 09:35:46.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:49 np0005534516 nova_compute[253538]: 2025-11-25 09:35:49.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:51 np0005534516 nova_compute[253538]: 2025-11-25 09:35:51.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:52 np0005534516 podman[441232]: 2025-11-25 09:35:52.848566557 +0000 UTC m=+0.098128187 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:35:53
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:35:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:35:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:35:54 np0005534516 nova_compute[253538]: 2025-11-25 09:35:54.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:56 np0005534516 nova_compute[253538]: 2025-11-25 09:35:56.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:35:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:35:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:35:59 np0005534516 nova_compute[253538]: 2025-11-25 09:35:59.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:01 np0005534516 nova_compute[253538]: 2025-11-25 09:36:01.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:04 np0005534516 nova_compute[253538]: 2025-11-25 09:36:04.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:36:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:36:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:06 np0005534516 nova_compute[253538]: 2025-11-25 09:36:06.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a5a17609-87e6-487a-aa16-e36f9a25d6c8 does not exist
Nov 25 04:36:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 2f483500-c670-4737-8681-5ee12618578a does not exist
Nov 25 04:36:07 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d5ecfb5b-27e6-499a-b696-0978b26a6d1a does not exist
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:07 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.638933064 +0000 UTC m=+0.043266220 container create 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 04:36:07 np0005534516 systemd[1]: Started libpod-conmon-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope.
Nov 25 04:36:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.619126784 +0000 UTC m=+0.023459960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.727426557 +0000 UTC m=+0.131759733 container init 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.733672727 +0000 UTC m=+0.138005913 container start 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.737457451 +0000 UTC m=+0.141790637 container attach 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:36:07 np0005534516 competent_nash[441550]: 167 167
Nov 25 04:36:07 np0005534516 systemd[1]: libpod-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope: Deactivated successfully.
Nov 25 04:36:07 np0005534516 conmon[441550]: conmon 7c261f71e1bb5be791a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope/container/memory.events
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.743515916 +0000 UTC m=+0.147849072 container died 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 04:36:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-07d8a8e4fee59e9a7375ba67a66962696dde3e6514a1230c3a19821749541491-merged.mount: Deactivated successfully.
Nov 25 04:36:07 np0005534516 podman[441533]: 2025-11-25 09:36:07.815502738 +0000 UTC m=+0.219835894 container remove 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 04:36:07 np0005534516 systemd[1]: libpod-conmon-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope: Deactivated successfully.
Nov 25 04:36:08 np0005534516 podman[441573]: 2025-11-25 09:36:08.011064019 +0000 UTC m=+0.054309041 container create 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:36:08 np0005534516 systemd[1]: Started libpod-conmon-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope.
Nov 25 04:36:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:08 np0005534516 podman[441573]: 2025-11-25 09:36:07.983984062 +0000 UTC m=+0.027229164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:08 np0005534516 podman[441573]: 2025-11-25 09:36:08.088177762 +0000 UTC m=+0.131422814 container init 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 04:36:08 np0005534516 podman[441573]: 2025-11-25 09:36:08.094637728 +0000 UTC m=+0.137882750 container start 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 04:36:08 np0005534516 podman[441573]: 2025-11-25 09:36:08.098532104 +0000 UTC m=+0.141777126 container attach 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:36:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:09 np0005534516 charming_shockley[441589]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:36:09 np0005534516 charming_shockley[441589]: --> relative data size: 1.0
Nov 25 04:36:09 np0005534516 charming_shockley[441589]: --> All data devices are unavailable
Nov 25 04:36:09 np0005534516 systemd[1]: libpod-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope: Deactivated successfully.
Nov 25 04:36:09 np0005534516 podman[441573]: 2025-11-25 09:36:09.105215069 +0000 UTC m=+1.148460131 container died 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 04:36:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b-merged.mount: Deactivated successfully.
Nov 25 04:36:09 np0005534516 podman[441573]: 2025-11-25 09:36:09.166858119 +0000 UTC m=+1.210103141 container remove 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:36:09 np0005534516 systemd[1]: libpod-conmon-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope: Deactivated successfully.
Nov 25 04:36:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:09 np0005534516 nova_compute[253538]: 2025-11-25 09:36:09.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.725487277 +0000 UTC m=+0.041088831 container create a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:36:09 np0005534516 systemd[1]: Started libpod-conmon-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope.
Nov 25 04:36:09 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.80192551 +0000 UTC m=+0.117527084 container init a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.707885657 +0000 UTC m=+0.023487241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.807739329 +0000 UTC m=+0.123340883 container start a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.811324977 +0000 UTC m=+0.126926551 container attach a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:36:09 np0005534516 intelligent_jones[441789]: 167 167
Nov 25 04:36:09 np0005534516 systemd[1]: libpod-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope: Deactivated successfully.
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.813373103 +0000 UTC m=+0.128974657 container died a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 04:36:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-7e62de42c35affd42fc0103dd254b09df83e5e28ffedf28f1d81a758f0c39161-merged.mount: Deactivated successfully.
Nov 25 04:36:09 np0005534516 podman[441772]: 2025-11-25 09:36:09.845017545 +0000 UTC m=+0.160619099 container remove a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 04:36:09 np0005534516 systemd[1]: libpod-conmon-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope: Deactivated successfully.
Nov 25 04:36:09 np0005534516 podman[441812]: 2025-11-25 09:36:09.988900118 +0000 UTC m=+0.035127008 container create f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 04:36:10 np0005534516 systemd[1]: Started libpod-conmon-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope.
Nov 25 04:36:10 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:10 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:10.059923895 +0000 UTC m=+0.106150785 container init f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:10.068076096 +0000 UTC m=+0.114302976 container start f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:09.974819454 +0000 UTC m=+0.021046344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:10.07298503 +0000 UTC m=+0.119211900 container attach f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:36:10 np0005534516 nova_compute[253538]: 2025-11-25 09:36:10.741 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]: {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    "0": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "devices": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "/dev/loop3"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            ],
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_name": "ceph_lv0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_size": "21470642176",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "name": "ceph_lv0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "tags": {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_name": "ceph",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.crush_device_class": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.encrypted": "0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_id": "0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.vdo": "0"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            },
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "vg_name": "ceph_vg0"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        }
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    ],
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    "1": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "devices": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "/dev/loop4"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            ],
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_name": "ceph_lv1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_size": "21470642176",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "name": "ceph_lv1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "tags": {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_name": "ceph",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.crush_device_class": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.encrypted": "0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_id": "1",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.vdo": "0"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            },
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "vg_name": "ceph_vg1"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        }
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    ],
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    "2": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "devices": [
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "/dev/loop5"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            ],
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_name": "ceph_lv2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_size": "21470642176",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "name": "ceph_lv2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "tags": {
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.cluster_name": "ceph",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.crush_device_class": "",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.encrypted": "0",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osd_id": "2",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:                "ceph.vdo": "0"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            },
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "type": "block",
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:            "vg_name": "ceph_vg2"
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:        }
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]:    ]
Nov 25 04:36:10 np0005534516 vigorous_williamson[441828]: }
Nov 25 04:36:10 np0005534516 systemd[1]: libpod-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope: Deactivated successfully.
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:10.87116794 +0000 UTC m=+0.917394810 container died f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 04:36:10 np0005534516 systemd[1]: var-lib-containers-storage-overlay-3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e-merged.mount: Deactivated successfully.
Nov 25 04:36:10 np0005534516 podman[441812]: 2025-11-25 09:36:10.925287555 +0000 UTC m=+0.971514425 container remove f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:36:10 np0005534516 systemd[1]: libpod-conmon-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope: Deactivated successfully.
Nov 25 04:36:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:11 np0005534516 nova_compute[253538]: 2025-11-25 09:36:11.142 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.589255726 +0000 UTC m=+0.042612903 container create 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:36:11 np0005534516 systemd[1]: Started libpod-conmon-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope.
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.567855863 +0000 UTC m=+0.021213060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.699032779 +0000 UTC m=+0.152389986 container init 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.707842039 +0000 UTC m=+0.161199216 container start 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.714427009 +0000 UTC m=+0.167784196 container attach 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 04:36:11 np0005534516 condescending_gould[442007]: 167 167
Nov 25 04:36:11 np0005534516 systemd[1]: libpod-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope: Deactivated successfully.
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.716833694 +0000 UTC m=+0.170190871 container died 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:36:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6243dbb4c07cb623e389df2ad8806fc81f2fc55fbf72603d4951b08ec48bcf77-merged.mount: Deactivated successfully.
Nov 25 04:36:11 np0005534516 podman[441991]: 2025-11-25 09:36:11.779521083 +0000 UTC m=+0.232878260 container remove 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:36:11 np0005534516 systemd[1]: libpod-conmon-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope: Deactivated successfully.
Nov 25 04:36:11 np0005534516 podman[442031]: 2025-11-25 09:36:11.982348273 +0000 UTC m=+0.048811452 container create d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 04:36:12 np0005534516 systemd[1]: Started libpod-conmon-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope.
Nov 25 04:36:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:36:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:12 np0005534516 podman[442031]: 2025-11-25 09:36:11.962842361 +0000 UTC m=+0.029305570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:36:12 np0005534516 podman[442031]: 2025-11-25 09:36:12.072368767 +0000 UTC m=+0.138831966 container init d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 04:36:12 np0005534516 podman[442031]: 2025-11-25 09:36:12.08055842 +0000 UTC m=+0.147021589 container start d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:36:12 np0005534516 podman[442031]: 2025-11-25 09:36:12.084014844 +0000 UTC m=+0.150478023 container attach d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 04:36:12 np0005534516 nova_compute[253538]: 2025-11-25 09:36:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]: {
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_id": 1,
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "type": "bluestore"
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    },
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_id": 2,
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "type": "bluestore"
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    },
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_id": 0,
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:        "type": "bluestore"
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]:    }
Nov 25 04:36:13 np0005534516 peaceful_tharp[442047]: }
Nov 25 04:36:13 np0005534516 systemd[1]: libpod-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope: Deactivated successfully.
Nov 25 04:36:13 np0005534516 podman[442031]: 2025-11-25 09:36:13.048349404 +0000 UTC m=+1.114812583 container died d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:36:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4-merged.mount: Deactivated successfully.
Nov 25 04:36:13 np0005534516 podman[442031]: 2025-11-25 09:36:13.104584217 +0000 UTC m=+1.171047396 container remove d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 04:36:13 np0005534516 systemd[1]: libpod-conmon-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope: Deactivated successfully.
Nov 25 04:36:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:36:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:36:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 00d4f859-e71a-4783-9944-da36c640d3ae does not exist
Nov 25 04:36:13 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev f4faf2d2-5e82-41c0-be05-d96889e24edb does not exist
Nov 25 04:36:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:14 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:36:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:14 np0005534516 nova_compute[253538]: 2025-11-25 09:36:14.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:14 np0005534516 podman[442143]: 2025-11-25 09:36:14.818352427 +0000 UTC m=+0.054449465 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:36:14 np0005534516 podman[442142]: 2025-11-25 09:36:14.825929814 +0000 UTC m=+0.063836941 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 25 04:36:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:16 np0005534516 nova_compute[253538]: 2025-11-25 09:36:16.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:19 np0005534516 systemd-logind[822]: New session 55 of user zuul.
Nov 25 04:36:19 np0005534516 systemd[1]: Started Session 55 of User zuul.
Nov 25 04:36:19 np0005534516 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:19 np0005534516 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:36:19 np0005534516 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:36:19 np0005534516 nova_compute[253538]: 2025-11-25 09:36:19.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:36:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:19 np0005534516 nova_compute[253538]: 2025-11-25 09:36:19.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:20 np0005534516 nova_compute[253538]: 2025-11-25 09:36:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:20 np0005534516 nova_compute[253538]: 2025-11-25 09:36:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:36:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:21 np0005534516 nova_compute[253538]: 2025-11-25 09:36:21.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:21 np0005534516 nova_compute[253538]: 2025-11-25 09:36:21.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:21 np0005534516 nova_compute[253538]: 2025-11-25 09:36:21.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:22 np0005534516 nova_compute[253538]: 2025-11-25 09:36:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:22 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23117 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:36:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:36:23 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 04:36:23 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940533214' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 04:36:23 np0005534516 podman[442407]: 2025-11-25 09:36:23.895893192 +0000 UTC m=+0.132954866 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:36:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:24 np0005534516 nova_compute[253538]: 2025-11-25 09:36:24.674 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:26 np0005534516 nova_compute[253538]: 2025-11-25 09:36:26.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:27 np0005534516 ovs-vsctl[442490]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 04:36:28 np0005534516 nova_compute[253538]: 2025-11-25 09:36:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:28 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 04:36:28 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 04:36:28 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 04:36:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:36:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:36:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:36:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:36:29 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: cache status {prefix=cache status} (starting...)
Nov 25 04:36:29 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: client ls {prefix=client ls} (starting...)
Nov 25 04:36:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:29 np0005534516 nova_compute[253538]: 2025-11-25 09:36:29.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:29 np0005534516 lvm[442852]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:36:29 np0005534516 lvm[442852]: VG ceph_vg0 finished
Nov 25 04:36:30 np0005534516 lvm[442858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 04:36:30 np0005534516 lvm[442858]: VG ceph_vg1 finished
Nov 25 04:36:30 np0005534516 lvm[442861]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 04:36:30 np0005534516 lvm[442861]: VG ceph_vg2 finished
Nov 25 04:36:30 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 04:36:30 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 04:36:30 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 04:36:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828198151' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 04:36:31 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 04:36:31 np0005534516 nova_compute[253538]: 2025-11-25 09:36:31.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:31 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:31 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:31.227+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:36:31 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:36:31 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193962000' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:36:31 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: ops {prefix=ops} (starting...)
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772513536' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 04:36:31 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481717257' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744050197' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: session ls {prefix=session ls} (starting...)
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728529688' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: status {prefix=status} (starting...)
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238580934' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 04:36:32 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2844190516' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 04:36:32 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128864166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9214792' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132945345' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 04:36:33 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917744854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 04:36:34 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23161 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:34 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:34.166+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 04:36:34 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 04:36:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 04:36:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058462891' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 04:36:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 04:36:34 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033806218' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 04:36:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:34 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:34 np0005534516 nova_compute[253538]: 2025-11-25 09:36:34.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:34 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23171 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133696125' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:35 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645074910' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23177 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee62000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee62000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.778717041s of 50.482543945s, submitted: 50
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a1299c9680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f42400 session 0x55a1284852c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1294e1a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a128485a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a12ad86960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eea00000/0x0/0x4ffc00000, data 0x1697f7c/0x17fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2760681 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eea00000/0x0/0x4ffc00000, data 0x1697f7c/0x17fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267444224 unmapped: 47636480 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129f52d20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f42400 session 0x55a128074960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129e9cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a129ac6b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a1285ada40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129d8e3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f59800 session 0x55a12962a960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129d7eb40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a129d8f4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee083000/0x0/0x4ffc00000, data 0x2012fee/0x217b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2837556 data_alloc: 218103808 data_used: 7307264
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a12808dc20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3ac00 session 0x55a1294e12c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a12761c780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee083000/0x0/0x4ffc00000, data 0x2012fee/0x217b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4bc20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2875638 data_alloc: 218103808 data_used: 12042240
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943318 data_alloc: 234881024 data_used: 21549056
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a12b693400
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943798 data_alloc: 234881024 data_used: 21561344
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.959785461s of 21.312959671s, submitted: 50
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275210240 unmapped: 39870464 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ecf17000/0x0/0x4ffc00000, data 0x317e011/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece93000/0x0/0x4ffc00000, data 0x3202011/0x336b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105258 data_alloc: 234881024 data_used: 22839296
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece93000/0x0/0x4ffc00000, data 0x3202011/0x336b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece72000/0x0/0x4ffc00000, data 0x3223011/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3101998 data_alloc: 234881024 data_used: 22851584
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.514759064s of 12.928457260s, submitted: 142
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129ce9c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129e9a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece72000/0x0/0x4ffc00000, data 0x3223011/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a13007f400 session 0x55a12ad861e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1285781e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129d7e960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129d5d4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129e9d0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12c5fc000 session 0x55a1294e1680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2953948 data_alloc: 218103808 data_used: 12865536
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a128517c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a1294f2f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a1277a3c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129d5d680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4edcb8000/0x0/0x4ffc00000, data 0x2316f7c/0x247d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954356 data_alloc: 218103808 data_used: 12865536
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed874000/0x0/0x4ffc00000, data 0x275af7c/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f40400 session 0x55a129f4ad20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed874000/0x0/0x4ffc00000, data 0x275af7c/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129eb3e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a1280734a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.086547852s of 10.297847748s, submitted: 37
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a12844cb40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958596 data_alloc: 218103808 data_used: 12865536
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989796 data_alloc: 234881024 data_used: 17223680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed939000/0x0/0x4ffc00000, data 0x275bfaf/0x28c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990232 data_alloc: 234881024 data_used: 17227776
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.535174370s of 14.598592758s, submitted: 14
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 272859136 unmapped: 42221568 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ecf05000/0x0/0x4ffc00000, data 0x3182faf/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273547264 unmapped: 41533440 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087538 data_alloc: 234881024 data_used: 18219008
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087554 data_alloc: 234881024 data_used: 18219008
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090754 data_alloc: 234881024 data_used: 18391040
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a12761c3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.127888680s of 13.495772362s, submitted: 77
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12abf8800 session 0x55a129e9c780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1285163c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4edd6e000/0x0/0x4ffc00000, data 0x2329f7c/0x2490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2929259 data_alloc: 218103808 data_used: 12951552
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a12808cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129ac6b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a126bf7a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee64000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2756393 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.729051590s of 10.018085480s, submitted: 60
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129f4a000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268738560 unmapped: 50020352 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a127620000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a1294f2b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129eb2960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb21e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee20a000/0x0/0x4ffc00000, data 0x1e8df7c/0x1ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848624 data_alloc: 218103808 data_used: 7303168
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a12962be00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee20a000/0x0/0x4ffc00000, data 0x1e8df7c/0x1ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951c000 session 0x55a128000b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269705216 unmapped: 49053696 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943878 data_alloc: 234881024 data_used: 20254720
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee209000/0x0/0x4ffc00000, data 0x1e8df9f/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943878 data_alloc: 234881024 data_used: 20254720
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee209000/0x0/0x4ffc00000, data 0x1e8df9f/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.334646225s of 18.030015945s, submitted: 29
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022528 data_alloc: 234881024 data_used: 20365312
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273301504 unmapped: 45457408 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273309696 unmapped: 45449216 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028408 data_alloc: 234881024 data_used: 21094400
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028408 data_alloc: 234881024 data_used: 21094400
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951d800 session 0x55a128517e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.266537666s of 12.748188019s, submitted: 50
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12935c400 session 0x55a1285161e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12935d400 session 0x55a12ad863c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a127698400 session 0x55a128073860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12951c000 session 0x55a129f4b0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285696000 unmapped: 35487744 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 255 ms_handle_reset con 0x55a12951d800 session 0x55a128072f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 255 heartbeat osd_stat(store_statfs(0x4ec999000/0x0/0x4ffc00000, data 0x36fbb2c/0x3865000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285712384 unmapped: 35471360 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12a6e4000 session 0x55a127620960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3184509 data_alloc: 234881024 data_used: 27652096
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285720576 unmapped: 35463168 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129d7f4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a12761c960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a1277a2000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285728768 unmapped: 35454976 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 heartbeat osd_stat(store_statfs(0x4ec991000/0x0/0x4ffc00000, data 0x36ff296/0x386b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3174861 data_alloc: 234881024 data_used: 27652096
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 38264832 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951d800 session 0x55a129d7f0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a129f43000 session 0x55a129ce94a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129ce83c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a1299c81e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a129f4b4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951d800 session 0x55a129d5cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12abf9000 session 0x55a1277a2780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a13007f800 session 0x55a129d7fa40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129d8fa40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a128517680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279527424 unmapped: 45326336 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279527424 unmapped: 45326336 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.191231728s of 10.974235535s, submitted: 81
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45309952 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45309952 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfe7000/0x0/0x4ffc00000, data 0x40a8cf9/0x4216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3247971 data_alloc: 234881024 data_used: 27660288
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279552000 unmapped: 45301760 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279552000 unmapped: 45301760 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951d800 session 0x55a12ad87a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfe7000/0x0/0x4ffc00000, data 0x40a8cf9/0x4216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279855104 unmapped: 44998656 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 44990464 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951c000 session 0x55a129d8f860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279912448 unmapped: 44941312 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3307448 data_alloc: 251658240 data_used: 35053568
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 280035328 unmapped: 44818432 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283172864 unmapped: 41680896 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3378328 data_alloc: 251658240 data_used: 44978176
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.041135788s of 16.079441071s, submitted: 65
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 36036608 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebcb9000/0x0/0x4ffc00000, data 0x43d7cf9/0x4545000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3427395 data_alloc: 251658240 data_used: 45391872
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289161216 unmapped: 35692544 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289193984 unmapped: 35659776 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291012608 unmapped: 33841152 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291037184 unmapped: 33816576 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb30e000/0x0/0x4ffc00000, data 0x4d7ccf9/0x4eea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 31539200 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508061 data_alloc: 251658240 data_used: 45445120
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 31539200 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 31531008 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 31531008 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb25e000/0x0/0x4ffc00000, data 0x4e2acf9/0x4f98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293355520 unmapped: 31498240 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.931900978s of 10.228565216s, submitted: 100
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3502997 data_alloc: 251658240 data_used: 45445120
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb241000/0x0/0x4ffc00000, data 0x4e4fcf9/0x4fbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a1281530e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a128153860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b694400 session 0x55a128579e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b4d0400 session 0x55a127959a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a127620f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951c000 session 0x55a1294a6960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b694400 session 0x55a128547a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a1294a6b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a129f44000 session 0x55a128665a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3577914 data_alloc: 251658240 data_used: 45445120
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 32366592 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e6000/0x0/0x4ffc00000, data 0x57a9d5b/0x5918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292503552 unmapped: 32350208 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3578094 data_alloc: 251658240 data_used: 45445120
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.780004501s of 11.949531555s, submitted: 35
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e3000/0x0/0x4ffc00000, data 0x57acd5b/0x591b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 32333824 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e3000/0x0/0x4ffc00000, data 0x57acd5b/0x591b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 32333824 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 32325632 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a1285bb4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3585006 data_alloc: 251658240 data_used: 46264320
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293224448 unmapped: 31629312 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8bf000/0x0/0x4ffc00000, data 0x57d0d5b/0x593f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a12962be00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293257216 unmapped: 31596544 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a12e96ec00 session 0x55a129e9de00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a1286ad000 session 0x55a129f4a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a129f38c00 session 0x55a129f4b4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 25550848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a1286ad000 session 0x55a129e9d860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 34914304 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 259 ms_handle_reset con 0x55a1286df800 session 0x55a12962b2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 32931840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a12e96d000 session 0x55a129d8e1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 heartbeat osd_stat(store_statfs(0x4e9393000/0x0/0x4ffc00000, data 0x6cf94a9/0x6e6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857252 data_alloc: 268435456 data_used: 57167872
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a12e96ec00 session 0x55a129f4a960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306790400 unmapped: 30670848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a129f43800 session 0x55a129ce92c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a1286ad000 session 0x55a129d5da40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306896896 unmapped: 30564352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.849305153s of 10.489986420s, submitted: 43
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a1286df800 session 0x55a1285161e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33284096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 heartbeat osd_stat(store_statfs(0x4ea686000/0x0/0x4ffc00000, data 0x5a03c2f/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33284096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 heartbeat osd_stat(store_statfs(0x4ea686000/0x0/0x4ffc00000, data 0x5a03c2f/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a127698400 session 0x55a129e9af00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a12935d400 session 0x55a129e9b680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a129f43800 session 0x55a129d5cb40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3645496 data_alloc: 268435456 data_used: 53866496
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 263 ms_handle_reset con 0x55a127698400 session 0x55a12ad86960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 35528704 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 35528704 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 263 heartbeat osd_stat(store_statfs(0x4eb9d0000/0x0/0x4ffc00000, data 0x46b726f/0x482c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3546903 data_alloc: 251658240 data_used: 40132608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 33628160 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 33497088 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.238170624s of 10.207138062s, submitted: 191
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad86b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a12b694c00 session 0x55a12844c5a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 33497088 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a1286ad000 session 0x55a1284850e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x512dcee/0x52a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4ec62b000/0x0/0x4ffc00000, data 0x3a5cccb/0x3bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4ec62b000/0x0/0x4ffc00000, data 0x3a5cccb/0x3bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301591 data_alloc: 234881024 data_used: 26443776
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 04:36:35 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279833615' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec628000/0x0/0x4ffc00000, data 0x3a5e72e/0x3bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3306229 data_alloc: 234881024 data_used: 26468352
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.888059616s of 11.317874908s, submitted: 61
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1280014a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a128000b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed765000/0x0/0x4ffc00000, data 0x266e6cc/0x27e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074375 data_alloc: 218103808 data_used: 15634432
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007f800 session 0x55a128665860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a12855d0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eda0c000/0x0/0x4ffc00000, data 0x267c6cc/0x27f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a129d7f0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.931575775s of 47.195777893s, submitted: 51
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888029 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279502848 unmapped: 57958400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129ce92c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a12962b2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9d860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129f4b4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007f800 session 0x55a129f4a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882589 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9de00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277897216 unmapped: 59564032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897949 data_alloc: 218103808 data_used: 9506816
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897949 data_alloc: 218103808 data_used: 9506816
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277913600 unmapped: 59547648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277913600 unmapped: 59547648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.087421417s of 19.442668915s, submitted: 8
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 58105856 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952199 data_alloc: 218103808 data_used: 10223616
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 281747456 unmapped: 55713792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53c000/0x0/0x4ffc00000, data 0x1b4b6cc/0x1cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2957029 data_alloc: 218103808 data_used: 10440704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956693 data_alloc: 218103808 data_used: 10448896
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129d7ef00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1276212c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1277a2000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694c00 session 0x55a1285794a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.608951569s of 30.004426956s, submitted: 63
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283533312 unmapped: 53927936 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d7fe00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129eb3c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128579a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129ce8d20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286df800 session 0x55a129d8fa40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004634 data_alloc: 218103808 data_used: 10448896
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4802.4 total, 600.0 interval#012Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3363 writes, 13K keys, 3363 commit groups, 1.0 writes per commit group, ingest: 13.80 MB, 0.02 MB/s#012Interval WAL: 3363 writes, 1299 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004810 data_alloc: 218103808 data_used: 10448896
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004810 data_alloc: 218103808 data_used: 10448896
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb21e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [1])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.558760643s of 12.719706535s, submitted: 25
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 54435840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004398 data_alloc: 218103808 data_used: 10452992
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 54435840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 54730752 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033838 data_alloc: 218103808 data_used: 14458880
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033838 data_alloc: 218103808 data_used: 14458880
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.305363655s of 15.309145927s, submitted: 1
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49815552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287997952 unmapped: 49463296 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4ce000/0x0/0x4ffc00000, data 0x2a1a6cc/0x2b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117286 data_alloc: 218103808 data_used: 15851520
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110502 data_alloc: 218103808 data_used: 15851520
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.085641861s of 12.817085266s, submitted: 90
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3112330 data_alloc: 218103808 data_used: 15937536
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a1000/0x0/0x4ffc00000, data 0x2a476cc/0x2bbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129eb25a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128152d20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288956416 unmapped: 48504832 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288964608 unmapped: 48496640 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a12761cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed399000/0x0/0x4ffc00000, data 0x1b4f6cc/0x1cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964738 data_alloc: 218103808 data_used: 10526720
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed399000/0x0/0x4ffc00000, data 0x1b4f6cc/0x1cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964738 data_alloc: 218103808 data_used: 10526720
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a12962be00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a12855c3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.591695786s of 12.656030655s, submitted: 24
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12850c3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12855c000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e12c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a1294e01e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1294e1680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.633911133s of 43.654327393s, submitted: 5
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1285ada40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129e9ad20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1285781e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937838 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128579c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128578d20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128075680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a1286652c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001436 data_alloc: 218103808 data_used: 16015360
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.580469131s of 11.722208023s, submitted: 15
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 50962432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000380 data_alloc: 218103808 data_used: 16015360
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035518 data_alloc: 218103808 data_used: 16035840
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 48160768 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec9cf000/0x0/0x4ffc00000, data 0x21026db/0x2279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.718398094s of 11.239345551s, submitted: 157
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058784 data_alloc: 218103808 data_used: 16855040
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec93d000/0x0/0x4ffc00000, data 0x21926db/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec93d000/0x0/0x4ffc00000, data 0x21926db/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053172 data_alloc: 218103808 data_used: 16855040
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec924000/0x0/0x4ffc00000, data 0x21b36db/0x232a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055026054s of 11.294532776s, submitted: 35
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb3e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a127620000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12808dc20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053404 data_alloc: 218103808 data_used: 16855040
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a128001c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96ec00 session 0x55a129f4b0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129f4bc20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129d7f680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129eb32c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129f4b2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087144 data_alloc: 218103808 data_used: 16855040
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f38000 session 0x55a128517e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113064 data_alloc: 234881024 data_used: 19431424
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113064 data_alloc: 234881024 data_used: 19431424
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.307680130s of 19.415519714s, submitted: 9
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3212432 data_alloc: 234881024 data_used: 19800064
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 44736512 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea6c5000/0x0/0x4ffc00000, data 0x32716eb/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 44736512 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3218054 data_alloc: 234881024 data_used: 19988480
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217482 data_alloc: 234881024 data_used: 19988480
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea627000/0x0/0x4ffc00000, data 0x330f6eb/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8e5a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea627000/0x0/0x4ffc00000, data 0x330f6eb/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.704324722s of 14.392348289s, submitted: 68
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9be00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062914 data_alloc: 218103808 data_used: 15761408
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb777000/0x0/0x4ffc00000, data 0x21c06db/0x2337000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127959e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1285794a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a74a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1276214a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128516d20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.088615417s of 44.159133911s, submitted: 19
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844c000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12808c780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1299c8000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129ce90e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906377 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1280734a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909117 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.063507080s of 19.188037872s, submitted: 5
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286867456 unmapped: 50593792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 50577408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004787 data_alloc: 218103808 data_used: 10338304
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005123 data_alloc: 218103808 data_used: 10346496
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e1680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855d0e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129f4ab40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b692800 session 0x55a1294f3860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.005581856s of 12.598716736s, submitted: 51
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 45473792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962b2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12844d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a1294a7c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1299f6c00 session 0x55a12761c960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb2f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129e9d860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb25a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129d8e780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059899 data_alloc: 218103808 data_used: 10350592
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.668926239s of 21.840452194s, submitted: 15
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292233216 unmapped: 45228032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebda6000/0x0/0x4ffc00000, data 0x2bca6ec/0x2d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292864000 unmapped: 44597248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194841 data_alloc: 234881024 data_used: 18636800
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196601 data_alloc: 234881024 data_used: 18771968
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698c00 session 0x55a128547a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12761c1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.503499985s of 12.766171455s, submitted: 100
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013621 data_alloc: 218103808 data_used: 10407936
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eccb6000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a70e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12850d860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.827690125s of 42.880455017s, submitted: 18
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290463744 unmapped: 46997504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128094f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a128073860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a127959e00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5de00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12ad874a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939984 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289472512 unmapped: 47988736 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.737442017s of 17.874473572s, submitted: 19
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291201024 unmapped: 46260224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013660 data_alloc: 218103808 data_used: 11976704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015580 data_alloc: 218103808 data_used: 12120064
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015900 data_alloc: 218103808 data_used: 12128256
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.645758629s of 14.787140846s, submitted: 29
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12844c960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad87680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad87a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935dc00 session 0x55a12ad865a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad86f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12962a5a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291233792 unmapped: 46227456 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291299328 unmapped: 46161920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.151699066s of 20.260654449s, submitted: 29
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293240832 unmapped: 44220416 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131253 data_alloc: 218103808 data_used: 16269312
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293249024 unmapped: 44212224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec31b000/0x0/0x4ffc00000, data 0x264e72e/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3142553 data_alloc: 218103808 data_used: 16080896
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129ac61e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129ce85a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb2f000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022497 data_alloc: 218103808 data_used: 12189696
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365324974s of 12.755796432s, submitted: 111
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129d7fa40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127620780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.636383057s of 33.703060150s, submitted: 22
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298565632 unmapped: 38895616 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128517c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9b860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5cb40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1280952c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12761c780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed2ab000/0x0/0x4ffc00000, data 0x16cd6cc/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962b4a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956231 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128517680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12761c3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 43474944 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12962b680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128665860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.936422348s of 14.315987587s, submitted: 15
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993848 data_alloc: 218103808 data_used: 12095488
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad865a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad87a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129eb21e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12ad87680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.175201416s of 48.326942444s, submitted: 23
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d8e780
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1294f3860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294e1680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a12761cf00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999881 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12d8ce800 session 0x55a128094f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000013 data_alloc: 218103808 data_used: 7364608
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.436155319s of 18.604982376s, submitted: 24
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 43556864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec693000/0x0/0x4ffc00000, data 0x22e56cc/0x245b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365725517s of 12.565147400s, submitted: 42
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5de00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855c3c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d7e000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131957 data_alloc: 234881024 data_used: 17362944
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 43401216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a13007dc00 session 0x55a129f53860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a12808d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a1286a5000 session 0x55a128579a40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a129f4a5a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299204608 unmapped: 41099264 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a127698400 session 0x55a1299c9860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 40034304 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 heartbeat osd_stat(store_statfs(0x4ebc4a000/0x0/0x4ffc00000, data 0x2d29e2a/0x2ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a12935d400 session 0x55a1294f2b40
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127698400 session 0x55a12ad86000
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127eb5400 session 0x55a12761d2c0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3255591 data_alloc: 234881024 data_used: 24862720
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 40017920 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.536736488s of 10.118284225s, submitted: 64
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a127620960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eccee000/0x0/0x4ffc00000, data 0x1c83426/0x1dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a12844c960
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb4000 session 0x55a129e9af00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a13007dc00 session 0x55a1285474a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129ce81e0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a129f53c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6f00
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a1294a7860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 316350464 unmapped: 36495360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5ae000/0x0/0x4ffc00000, data 0x33c3488/0x3540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,4,1,3])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129f4b680
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a128001c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5aa000/0x0/0x4ffc00000, data 0x33c7488/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263674 data_alloc: 234881024 data_used: 18100224
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.454534531s of 13.402193069s, submitted: 53
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 270 ms_handle_reset con 0x55a13007dc00 session 0x55a1299c9c20
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194453 data_alloc: 218103808 data_used: 15204352
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197427 data_alloc: 218103808 data_used: 15204352
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.603380203s of 11.698580742s, submitted: 38
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd8000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298754048 unmapped: 54091776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216491 data_alloc: 234881024 data_used: 17309696
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.006009102s of 14.082138062s, submitted: 3
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.220427513s of 10.246352196s, submitted: 3
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a12ad87860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb3860
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a12ad865a0
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972369 data_alloc: 218103808 data_used: 7401472
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:35 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.520774841s of 24.589715958s, submitted: 20
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129f4a1e0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a129d8f2c0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ce85a0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5d680
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a12850cf00
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12ad86960
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129d5cd20
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048048 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a1285ba780
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007e000 session 0x55a129d7f0e0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 58523648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129eb30e0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12844cb40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6b40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5402.4 total, 600.0 interval
Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a129f3ac00
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.009826660s of 77.842643738s, submitted: 41
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977528 data_alloc: 218103808 data_used: 7405568
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 272 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb2b40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 273 ms_handle_reset con 0x55a1286de800 session 0x55a129d7eb40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923180 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 273 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1294e1a40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.864181519s of 36.010292053s, submitted: 49
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293396480 unmapped: 59449344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 58277888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 82.001701355s of 82.319427490s, submitted: 90
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1285ba780
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a127eb5400 session 0x55a12850cf00
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 58228736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 275 ms_handle_reset con 0x55a1286a4000 session 0x55a129f4a1e0
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 275 heartbeat osd_stat(store_statfs(0x4edfb4000/0x0/0x4ffc00000, data 0x9ba687/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.517580032s of 12.192517281s, submitted: 57
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925069 data_alloc: 218103808 data_used: 7413760
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 58097664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 276 heartbeat osd_stat(store_statfs(0x4ee7b2000/0x0/0x4ffc00000, data 0x1bc258/0x33b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 276 ms_handle_reset con 0x55a1286a5000 session 0x55a12850c780
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857665 data_alloc: 218103808 data_used: 679936
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 58073088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2860639 data_alloc: 218103808 data_used: 679936
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.606686592s of 14.555405617s, submitted: 67
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 ms_handle_reset con 0x55a129f44800 session 0x55a129d8f680
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 57876480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294985728 unmapped: 57860096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 104.417770386s of 104.486747742s, submitted: 10
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 49455104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919522 data_alloc: 218103808 data_used: 688128
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 280 ms_handle_reset con 0x55a12769b400 session 0x55a1294f3a40
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.512447357s of 10.195021629s, submitted: 4
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edfa4000/0x0/0x4ffc00000, data 0x9c2e7a/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 57802752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926341 data_alloc: 218103808 data_used: 696320
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 49414144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecfa4000/0x0/0x4ffc00000, data 0x19c2e9d/0x1b4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 ms_handle_reset con 0x55a127eb5400 session 0x55a1299c8000
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295075840 unmapped: 57769984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.138420105s of 43.667034149s, submitted: 14
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 57720832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 282 ms_handle_reset con 0x55a1286a4000 session 0x55a129f52780
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782243729s of 10.014166832s, submitted: 29
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 57597952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 57540608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 57524224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 57499648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 57425920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 57376768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 57344000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 57286656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 57253888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6002.4 total, 600.0 interval
Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s
Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 57032704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 57016320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 322.681640625s of 322.787384033s, submitted: 11
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 56893440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 56819712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 56803328 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 56754176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.430641174s of 98.900810242s, submitted: 90
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf9d000/0x0/0x4ffc00000, data 0x19c8008/0x1b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 284 ms_handle_reset con 0x55a1286a5000 session 0x55a129ce9860
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995067 data_alloc: 218103808 data_used: 729088
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 285 ms_handle_reset con 0x55a129f44800 session 0x55a129eb3860
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 285 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0x1cb787/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889761 data_alloc: 218103808 data_used: 737280
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809815407s of 11.059050560s, submitted: 61
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 286 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894255 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 287 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 287 ms_handle_reset con 0x55a12769b400 session 0x55a127620780
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 56451072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.735168457s of 63.853855133s, submitted: 16
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297492480 unmapped: 55353344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 289 ms_handle_reset con 0x55a127eb5400 session 0x55a128153860
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37c000/0x0/0x4ffc00000, data 0x1d23d3/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 55238656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 55214080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297713664 unmapped: 55132160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297787392 unmapped: 55058432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:36 np0005534516 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.150 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:36 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23181 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 04:36:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402239297' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:36:36 np0005534516 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:36:36 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23185 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:36 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 04:36:36 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510233932' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 04:36:37 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371963626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.055 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:36:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284209462' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.232 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.233 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3415MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.364 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.364 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.387 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:36:37 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769775139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 04:36:37 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23201 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:36:37 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183006800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.917 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.923 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.951 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.953 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:36:37 np0005534516 nova_compute[253538]: 2025-11-25 09:36:37.953 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:36:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 04:36:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770772115' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 04:36:38 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:36:38 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:36:38 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:38.615+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:36:38 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 04:36:38 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266162372' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910845931' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 04:36:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305211320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378948537' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:36:39 np0005534516 nova_compute[253538]: 2025-11-25 09:36:39.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487684928' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.836271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399836301, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 252, "total_data_size": 2388398, "memory_usage": 2424080, "flush_reason": "Manual Compaction"}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399857445, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2353693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71261, "largest_seqno": 72823, "table_properties": {"data_size": 2346353, "index_size": 4282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15962, "raw_average_key_size": 20, "raw_value_size": 2331335, "raw_average_value_size": 2985, "num_data_blocks": 191, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063245, "oldest_key_time": 1764063245, "file_creation_time": 1764063399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 21899 microseconds, and 6994 cpu microseconds.
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.858161) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2353693 bytes OK
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.858293) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865827) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865896) EVENT_LOG_v1 {"time_micros": 1764063399865880, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2381453, prev total WAL file size 2381453, number of live WAL files 2.
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.867943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2298KB)], [170(10MB)]
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399868009, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13063802, "oldest_snapshot_seqno": -1}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8998 keys, 11310088 bytes, temperature: kUnknown
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399939951, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11310088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11252309, "index_size": 34178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 236863, "raw_average_key_size": 26, "raw_value_size": 11093930, "raw_average_value_size": 1232, "num_data_blocks": 1323, "num_entries": 8998, "num_filter_entries": 8998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.940198) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11310088 bytes
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.941597) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 157.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(10.4) write-amplify(4.8) OK, records in: 9518, records dropped: 520 output_compression: NoCompression
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.941612) EVENT_LOG_v1 {"time_micros": 1764063399941605, "job": 106, "event": "compaction_finished", "compaction_time_micros": 72000, "compaction_time_cpu_micros": 24505, "output_level": 6, "num_output_files": 1, "total_output_size": 11310088, "num_input_records": 9518, "num_output_records": 8998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399942011, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399943661, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.867864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 04:36:39 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777993575' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266855097' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612359251' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 04:36:40 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705432200' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215035 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215035 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319332352 unmapped: 45883392 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c74b860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8d118f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e3ed860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 45875200 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d118f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.639049530s of 45.752262115s, submitted: 23
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8c62a780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8f049c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8d2370e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8c8134a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288911 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb9bf000/0x0/0x4ffc00000, data 0x27aa8d8/0x290f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb9bf000/0x0/0x4ffc00000, data 0x27aa8d8/0x290f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8f38eb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8e4321e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8cff1860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8ed734a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb1a2000/0x0/0x4ffc00000, data 0x2fc78d8/0x312c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352946 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c74ab40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8e50f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 53633024 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8e50e3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e24fa40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 53633024 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c813a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb1a1000/0x0/0x4ffc00000, data 0x2fc78fb/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 320184320 unmapped: 53428224 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361680 data_alloc: 218103808 data_used: 14905344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 320184320 unmapped: 53428224 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb176000/0x0/0x4ffc00000, data 0x2ff190b/0x3158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d93337000 session 0x561d8c74a5a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3479600 data_alloc: 234881024 data_used: 31514624
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8f035c00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f031c00 session 0x561d8e24f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8ceb1000 session 0x561d8d117680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3479600 data_alloc: 234881024 data_used: 31514624
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb176000/0x0/0x4ffc00000, data 0x2ff190b/0x3158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.041730881s of 21.370254517s, submitted: 46
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ead6d000/0x0/0x4ffc00000, data 0x33fa90b/0x3561000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,0,5,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332963840 unmapped: 40648704 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 333488128 unmapped: 40124416 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9b20000/0x0/0x4ffc00000, data 0x464690b/0x47ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3670790 data_alloc: 234881024 data_used: 32821248
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3666786 data_alloc: 234881024 data_used: 32821248
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.411730766s of 12.939365387s, submitted: 176
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8cff1c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e41e000 session 0x561d8c881860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8eb53c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 41230336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438469 data_alloc: 218103808 data_used: 23339008
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 41230336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e249e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8d119e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505912 data_alloc: 218103808 data_used: 23339008
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505912 data_alloc: 218103808 data_used: 23339008
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547352 data_alloc: 234881024 data_used: 28848128
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.149969101s of 19.457172394s, submitted: 94
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547528 data_alloc: 234881024 data_used: 28848128
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e907f000/0x0/0x4ffc00000, data 0x3f4895d/0x40af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 38035456 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625298 data_alloc: 234881024 data_used: 30154752
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fec000/0x0/0x4ffc00000, data 0x3fda95d/0x4141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625458 data_alloc: 234881024 data_used: 30158848
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.860197067s of 12.206836700s, submitted: 105
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fce000/0x0/0x4ffc00000, data 0x3ff995d/0x4160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fce000/0x0/0x4ffc00000, data 0x3ff995d/0x4160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617634 data_alloc: 234881024 data_used: 30179328
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 38150144 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 38150144 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8d236d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8e248f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9f10000/0x0/0x4ffc00000, data 0x30b78fb/0x321d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9f10000/0x0/0x4ffc00000, data 0x30b78fb/0x321d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450306 data_alloc: 218103808 data_used: 23072768
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90c780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.583516121s of 12.746244431s, submitted: 38
Nov 25 04:36:40 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8d2a7e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254418 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb136000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e705860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e248b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8e24f680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8d2a7e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326139904 unmapped: 47472640 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e248f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8cff1c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3280725 data_alloc: 218103808 data_used: 14893056
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d65000/0x0/0x4ffc00000, data 0x20c492a/0x2229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c849860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8e24f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8c459800 session 0x561d8e513e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d65000/0x0/0x4ffc00000, data 0x20c492a/0x2229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.653755188s of 10.120682716s, submitted: 65
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8c813a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327344128 unmapped: 46268416 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327278592 unmapped: 46333952 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291143 data_alloc: 218103808 data_used: 15781888
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291143 data_alloc: 218103808 data_used: 15781888
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.936169624s of 11.946027756s, submitted: 2
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307099 data_alloc: 218103808 data_used: 15839232
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 328499200 unmapped: 45113344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 48226304 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9835000/0x0/0x4ffc00000, data 0x25f393a/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9787000/0x0/0x4ffc00000, data 0x26a093a/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354701 data_alloc: 218103808 data_used: 16392192
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9764000/0x0/0x4ffc00000, data 0x26c493a/0x282a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9764000/0x0/0x4ffc00000, data 0x26c493a/0x282a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346753 data_alloc: 218103808 data_used: 16396288
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.099457741s of 10.687935829s, submitted: 103
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 47374336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9752000/0x0/0x4ffc00000, data 0x26d693a/0x283c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 47374336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8e74c800 session 0x561d8e8c4d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 38273024 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8f032800 session 0x561d8d2a6780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 254 heartbeat osd_stat(store_statfs(0x4e9752000/0x0/0x4ffc00000, data 0x26d693a/0x283c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8e52f000 session 0x561d8d2361e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 51830784 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521865 data_alloc: 218103808 data_used: 23535616
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8e24e5a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 51822592 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8d10c000 session 0x561d8e704780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 51814400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8e74c800 session 0x561d8c62bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8f032800 session 0x561d8f046d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 51814400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e857c000/0x0/0x4ffc00000, data 0x38a5c83/0x3a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527727 data_alloc: 218103808 data_used: 23535616
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e857c000/0x0/0x4ffc00000, data 0x38a5c83/0x3a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.871630669s of 10.184924126s, submitted: 37
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c585800 session 0x561d8c8805a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8cff14a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a41e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332210176 unmapped: 54231040 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8e74c800 session 0x561d8e513c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8f032800 session 0x561d8e1a14a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x3d1bcac/0x3e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c6d7400 session 0x561d8f38f0e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8e5134a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565348 data_alloc: 218103808 data_used: 23547904
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e8103000/0x0/0x4ffc00000, data 0x3d1d748/0x3e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10c000 session 0x561d8e1a1680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8ca96960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e8104000/0x0/0x4ffc00000, data 0x3d1d748/0x3e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8e1a1a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e39ed20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8cffe3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582741 data_alloc: 234881024 data_used: 25600000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614901 data_alloc: 234881024 data_used: 30093312
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.748184204s of 18.947298050s, submitted: 100
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648765 data_alloc: 234881024 data_used: 34914304
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 51978240 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 51978240 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 51970048 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c79000/0x0/0x4ffc00000, data 0x41a0748/0x430d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 47718400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c79000/0x0/0x4ffc00000, data 0x41a0748/0x430d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 47718400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3695857 data_alloc: 234881024 data_used: 36913152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c73000/0x0/0x4ffc00000, data 0x41ae748/0x431b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338051072 unmapped: 48390144 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.900465012s of 10.162605286s, submitted: 84
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3707409 data_alloc: 234881024 data_used: 37318656
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 47251456 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c05000/0x0/0x4ffc00000, data 0x421c748/0x4389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8c644960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8c389c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747476 data_alloc: 234881024 data_used: 37322752
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8c969400 session 0x561d8f048b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e41f800 session 0x561d8f38f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8c969400 session 0x561d8e4cda40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7be7000/0x0/0x4ffc00000, data 0x423a748/0x43a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 47562752 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 47562752 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747576 data_alloc: 234881024 data_used: 37322752
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8c74bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8f047680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747576 data_alloc: 234881024 data_used: 37322752
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8f0481e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.523627281s of 15.641870499s, submitted: 25
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e52e000 session 0x561d8e8c41e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 47357952 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8d16e780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 47357952 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8e74c800 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8e41f400 session 0x561d8d236960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8c586c00 session 0x561d8e512000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 35299328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 47702016 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8f3b1400 session 0x561d8f38e780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 259 ms_handle_reset con 0x561d8c586c00 session 0x561d8e5125a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 45498368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4140652 data_alloc: 251658240 data_used: 54489088
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e3ecb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 45383680 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e41f400 session 0x561d8c7d25a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e74c800 session 0x561d8f049a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8c586000 session 0x561d8eb52b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 48603136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8c586000 session 0x561d8d117c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 48529408 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 heartbeat osd_stat(store_statfs(0x4e727b000/0x0/0x4ffc00000, data 0x479f61c/0x4911000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 48529408 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8f032800 session 0x561d8e8c54a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d237c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 48513024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3837511 data_alloc: 251658240 data_used: 53280768
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.442913055s of 10.001754761s, submitted: 134
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8c586c00 session 0x561d8e7023c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 262 heartbeat osd_stat(store_statfs(0x4e7398000/0x0/0x4ffc00000, data 0x467209b/0x47e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 48431104 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 263 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e18a3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 52772864 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631553 data_alloc: 234881024 data_used: 34648064
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 49659904 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 49889280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 263 heartbeat osd_stat(store_statfs(0x4e7fe4000/0x0/0x4ffc00000, data 0x3a1ec0a/0x3b92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e50e3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e50f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 49823744 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8c586000 session 0x561d8cff1a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3575510 data_alloc: 234881024 data_used: 26189824
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 heartbeat osd_stat(store_statfs(0x4e8776000/0x0/0x4ffc00000, data 0x3270617/0x33e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 heartbeat osd_stat(store_statfs(0x4e8776000/0x0/0x4ffc00000, data 0x3270617/0x33e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.457151413s of 12.123614311s, submitted: 157
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 53600256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3581608 data_alloc: 234881024 data_used: 26202112
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8773000/0x0/0x4ffc00000, data 0x329607a/0x340a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 52527104 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8e513a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e7c0d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8765000/0x0/0x4ffc00000, data 0x32a507a/0x3419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d25ba40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453322 data_alloc: 218103808 data_used: 20615168
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911143303s of 10.312707901s, submitted: 67
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c881a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9316000/0x0/0x4ffc00000, data 0x26f407a/0x2868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330719232 unmapped: 68329472 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d0334a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.686870575s of 42.879127502s, submitted: 27
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 54329344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8e90c1e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8c644f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d116f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1a52c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e3ecf00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e18000/0x0/0x4ffc00000, data 0x2bf3018/0x2d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e18000/0x0/0x4ffc00000, data 0x2bf3018/0x2d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450951 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 67420160 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e3ed4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 67420160 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ad20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62a1e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 67502080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 67493888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e16000/0x0/0x4ffc00000, data 0x2bf304b/0x2d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3552651 data_alloc: 234881024 data_used: 28540928
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3552651 data_alloc: 234881024 data_used: 28540928
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e16000/0x0/0x4ffc00000, data 0x2bf304b/0x2d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.975399017s of 19.610374451s, submitted: 38
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3578299 data_alloc: 234881024 data_used: 28549120
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 61677568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e857a000/0x0/0x4ffc00000, data 0x348704b/0x35fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631847 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84fa000/0x0/0x4ffc00000, data 0x350f04b/0x3684000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629483 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d9000/0x0/0x4ffc00000, data 0x353004b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629483 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d9000/0x0/0x4ffc00000, data 0x353004b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.343145370s of 16.906240463s, submitted: 101
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629527 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.4 total, 600.0 interval#012Cumulative writes: 42K writes, 168K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.81 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4521 writes, 19K keys, 4521 commit groups, 1.0 writes per commit group, ingest: 23.42 MB, 0.04 MB/s#012Interval WAL: 4521 writes, 1686 syncs, 2.68 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629527 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.767104149s of 11.775897026s, submitted: 2
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 61865984 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e513a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e3b8800 session 0x561d8cff1a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e50f2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e50e3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61644800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e18a3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712601 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d117c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586c00 session 0x561d8f049a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c7d25a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e3ecb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3711369 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3711369 data_alloc: 234881024 data_used: 30068736
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e512000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d236960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f032800 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.303936958s of 13.445974350s, submitted: 21
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e8c41e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713367 data_alloc: 234881024 data_used: 30072832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337969152 unmapped: 61079552 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3786743 data_alloc: 234881024 data_used: 40497152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3786743 data_alloc: 234881024 data_used: 40497152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.973099709s of 15.986382484s, submitted: 2
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55328768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 51625984 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 50724864 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888193 data_alloc: 234881024 data_used: 41283584
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7020000/0x0/0x4ffc00000, data 0x49d906b/0x4b50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7020000/0x0/0x4ffc00000, data 0x49d906b/0x4b50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888673 data_alloc: 234881024 data_used: 41295872
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 50438144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 50438144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3882581 data_alloc: 234881024 data_used: 41402368
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e700d000/0x0/0x4ffc00000, data 0x49fa06b/0x4b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.950686455s of 12.774372101s, submitted: 94
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8f0481e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e18b2c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6fff000/0x0/0x4ffc00000, data 0x4a0806b/0x4b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 50405376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643394 data_alloc: 234881024 data_used: 30081024
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e24f680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643394 data_alloc: 234881024 data_used: 30081024
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 50380800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.654464722s of 11.467668533s, submitted: 27
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8f049e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e50e000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 50380800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4325a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e1a1680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e8c5e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d116b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d25a960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.586261749s of 44.781974792s, submitted: 62
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d25a960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e1a1680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f049e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9665000/0x0/0x4ffc00000, data 0x23a408a/0x2519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418577 data_alloc: 218103808 data_used: 14983168
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e3ecb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c7d25a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f049a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d117c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449160 data_alloc: 218103808 data_used: 18993152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.539092064s of 11.824838638s, submitted: 45
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 61169664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449112 data_alloc: 218103808 data_used: 18997248
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 61145088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451036 data_alloc: 218103808 data_used: 19042304
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 59351040 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 57729024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e28000/0x0/0x4ffc00000, data 0x2bd80ad/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 57729024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.745832443s of 10.384993553s, submitted: 145
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528146 data_alloc: 218103808 data_used: 19152896
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525954 data_alloc: 218103808 data_used: 19156992
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525954 data_alloc: 218103808 data_used: 19156992
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.710719109s of 12.442818642s, submitted: 34
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7032c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e7c0780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e811e000/0x0/0x4ffc00000, data 0x38e910f/0x3a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619231 data_alloc: 218103808 data_used: 19156992
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1172c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e811e000/0x0/0x4ffc00000, data 0x38e910f/0x3a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ca963c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340770816 unmapped: 58277888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340795392 unmapped: 58253312 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340795392 unmapped: 58253312 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3670238 data_alloc: 234881024 data_used: 25145344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3720638 data_alloc: 234881024 data_used: 31768576
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.677581787s of 18.996076584s, submitted: 36
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721670 data_alloc: 234881024 data_used: 31793152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 52469760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c90000/0x0/0x4ffc00000, data 0x3d7611f/0x3eee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 52445184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 52355072 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3772452 data_alloc: 234881024 data_used: 32202752
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770996 data_alloc: 234881024 data_used: 32206848
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.320297241s of 10.981893539s, submitted: 59
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 51273728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 51265536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e24fa40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ca972c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 51265536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2494a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8971000/0x0/0x4ffc00000, data 0x2c330bd/0x2daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538985 data_alloc: 218103808 data_used: 18288640
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e50e3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x2c100ad/0x2d86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cd20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 60096512 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 60096512 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.827438354s of 47.724197388s, submitted: 107
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 59654144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1163c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a50e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7034a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7c0d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 59629568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.927917480s of 19.114524841s, submitted: 30
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 53174272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 53059584 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727151 data_alloc: 234881024 data_used: 29900800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3725011 data_alloc: 234881024 data_used: 29900800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.873039246s of 12.605758667s, submitted: 112
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 52609024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770760 data_alloc: 234881024 data_used: 29900800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e807a000/0x0/0x4ffc00000, data 0x3991018/0x3b04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,1,2])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2483c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e90cf00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e41f400 session 0x561d8e8c5a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f048f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b1c000/0x0/0x4ffc00000, data 0x3eef018/0x4062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769046 data_alloc: 234881024 data_used: 29900800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39ed20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d2365a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8c738780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e74c800 session 0x561d8d116b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771924 data_alloc: 234881024 data_used: 29900800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365564346s of 10.544802666s, submitted: 22
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804024 data_alloc: 234881024 data_used: 34402304
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804552 data_alloc: 234881024 data_used: 34402304
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.267370224s of 11.282814980s, submitted: 5
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347512832 unmapped: 51535872 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 51519488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874330 data_alloc: 234881024 data_used: 34574336
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872410 data_alloc: 234881024 data_used: 34574336
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72f9000/0x0/0x4ffc00000, data 0x4711028/0x4885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 51478528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.121975899s of 12.436902046s, submitted: 54
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90de00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4321e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d1a41e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736725 data_alloc: 234881024 data_used: 29904896
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8074000/0x0/0x4ffc00000, data 0x3997018/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e806d000/0x0/0x4ffc00000, data 0x399e018/0x3b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d25bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e249860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c8130e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e39fa40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e24ed20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.992057800s of 43.214187622s, submitted: 76
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,1,3,3,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8eb53c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c644960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f048780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e512780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517656 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d16f0e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4325a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e5105a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.722257614s of 17.883968353s, submitted: 34
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 52879360 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648258 data_alloc: 234881024 data_used: 26505216
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 52592640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2485a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8c74ba40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cc780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.496106148s of 14.768519402s, submitted: 82
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38ef00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e8c4d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ed734a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39f860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e50eb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d237860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d1a50e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c74b680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.003824234s of 17.156684875s, submitted: 17
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 46997504 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817961 data_alloc: 234881024 data_used: 35102720
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 46325760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3828065 data_alloc: 234881024 data_used: 35172352
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787124634s of 10.008896828s, submitted: 44
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 46292992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f38f680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684633 data_alloc: 234881024 data_used: 27738112
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e3ec1e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e18a3c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e39f860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed734a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8f38ef00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.750972748s of 38.958789825s, submitted: 67
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89bd000/0x0/0x4ffc00000, data 0x1ea8028/0x201c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 58351616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cc780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8f048780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8eb53c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d1a41e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d116b40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492197 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8112000/0x0/0x4ffc00000, data 0x234708a/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c738780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2485a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d2a63c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529564 data_alloc: 218103808 data_used: 19730432
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cd860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2492c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529432 data_alloc: 218103808 data_used: 19730432
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.582655907s of 14.895611763s, submitted: 46
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25a000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529368 data_alloc: 218103808 data_used: 19738624
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e248780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f0483c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f047c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 60456960 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.281188965s of 48.498039246s, submitted: 50
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c880000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525743 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f047680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e2490e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8f049e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e1a1a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338698240 unmapped: 60350464 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529969 data_alloc: 218103808 data_used: 14966784
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 60342272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.442842484s of 18.518671036s, submitted: 11
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 59375616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 54894592 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e712d000/0x0/0x4ffc00000, data 0x332d028/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3694817 data_alloc: 218103808 data_used: 24907776
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3691285 data_alloc: 218103808 data_used: 24907776
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7109000/0x0/0x4ffc00000, data 0x3351028/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927303314s of 10.246125221s, submitted: 93
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cdc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8cff1860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e70e8000/0x0/0x4ffc00000, data 0x3372028/0x34e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693961 data_alloc: 218103808 data_used: 24915968
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8d10d400 session 0x561d8e50e1e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8e41e400 session 0x561d8d116780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f0e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 52314112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c6d7800 session 0x561d8f38eb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 52297728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8d109000 session 0x561d8e3edc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 52273152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8f38fe00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8c459800 session 0x561d8ed73e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970481 data_alloc: 234881024 data_used: 33873920
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 heartbeat osd_stat(store_statfs(0x4e6421000/0x0/0x4ffc00000, data 0x5072371/0x51eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.572269440s of 12.077063560s, submitted: 93
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8ed72000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8e3eda40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8e41e400 session 0x561d8e7c1c20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8c659e00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1a5860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8d16e780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e5105a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 384196608 unmapped: 25747456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02c400 session 0x561d8c74ba40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8d2363c0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1174a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3907564 data_alloc: 251658240 data_used: 44687360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abd000/0x0/0x4ffc00000, data 0x49d7dc4/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8f38f4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500659943s of 11.212936401s, submitted: 43
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e50e000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abc000/0x0/0x4ffc00000, data 0x49d7dd3/0x4b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 41066496 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 ms_handle_reset con 0x561d8d0b7000 session 0x561d8f38ed20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668229 data_alloc: 218103808 data_used: 22822912
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671011 data_alloc: 218103808 data_used: 22822912
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.854998589s of 14.211294174s, submitted: 75
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3687955 data_alloc: 218103808 data_used: 24702976
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693395 data_alloc: 234881024 data_used: 25255936
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693571 data_alloc: 234881024 data_used: 25251840
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.356461525s of 12.525735855s, submitted: 8
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693235 data_alloc: 234881024 data_used: 25247744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3692883 data_alloc: 234881024 data_used: 25247744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c5cbc00 session 0x561d8f0485a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e3ed680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.056241989s of 12.152028084s, submitted: 5
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d0b7000 session 0x561d8e24eb40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.4 total, 600.0 interval#012Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s#012Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.209249496s of 24.272548676s, submitted: 19
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8c62bc20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551006 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e913b000/0x0/0x4ffc00000, data 0x2358396/0x24d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c6d7800 session 0x561d8c644960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 356655104 unmapped: 53288960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ba40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516663 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e18a780
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8e74f800 session 0x561d8e18be00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8c6aa800
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f032c00 session 0x561d8e1cc000
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f039c00 session 0x561d8d117860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.114036560s of 77.737030029s, submitted: 33
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 60071936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 272 ms_handle_reset con 0x561d8e52e800 session 0x561d8d16f680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 60063744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 273 ms_handle_reset con 0x561d8d10d000 session 0x561d8e4cd4a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478923 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 60039168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 60006400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d96070c00 session 0x561d8d117680
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 59973632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.830196381s of 36.115074158s, submitted: 95
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 59957248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8c459800 session 0x561d8e50e1e0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 59490304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8d10bc00 session 0x561d8cff1860
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 59744256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.937507629s of 91.256561279s, submitted: 90
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 275 heartbeat osd_stat(store_statfs(0x4e9a66000/0x0/0x4ffc00000, data 0x1a2a0f4/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 275 ms_handle_reset con 0x561d8d10d000 session 0x561d8e702d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342967 data_alloc: 218103808 data_used: 7806976
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 61595648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 276 heartbeat osd_stat(store_statfs(0x4eaa68000/0x0/0x4ffc00000, data 0xa2a0d1/0xba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 276 ms_handle_reset con 0x561d8e52e800 session 0x561d8c644f00
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3272720 data_alloc: 218103808 data_used: 1056768
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 64815104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.671767235s of 10.978181839s, submitted: 60
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 ms_handle_reset con 0x561d8e532400 session 0x561d8e7034a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 64724992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.406425476s of 111.508773804s, submitted: 26
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3287953 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d2f/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288207 data_alloc: 218103808 data_used: 1064960
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 280 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72d20
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291989 data_alloc: 218103808 data_used: 1073152
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb256000/0x0/0x4ffc00000, data 0x2328b1/0x3b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.714424610s of 13.350893974s, submitted: 22
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 ms_handle_reset con 0x561d8d10bc00 session 0x561d8c881a40
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345522176 unmapped: 64421888 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 64389120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 64372736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 64364544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.750205994s of 40.227725983s, submitted: 4
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 64356352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 ms_handle_reset con 0x561d8d10d000 session 0x561d8e8c54a0
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148046494s of 10.304548264s, submitted: 38
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 64315392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 64135168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:36:40 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:42:16 np0005534516 nova_compute[253538]: 2025-11-25 09:42:16.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:42:17 np0005534516 rsyslogd[1007]: imjournal: 15462 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 04:42:18 np0005534516 nova_compute[253538]: 2025-11-25 09:42:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 04:42:19 np0005534516 nova_compute[253538]: 2025-11-25 09:42:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:19 np0005534516 nova_compute[253538]: 2025-11-25 09:42:19.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 04:42:21 np0005534516 nova_compute[253538]: 2025-11-25 09:42:21.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:22 np0005534516 nova_compute[253538]: 2025-11-25 09:42:22.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:22 np0005534516 nova_compute[253538]: 2025-11-25 09:42:22.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:42:22 np0005534516 nova_compute[253538]: 2025-11-25 09:42:22.585 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:24 np0005534516 podman[454701]: 2025-11-25 09:42:24.806593381 +0000 UTC m=+0.049024309 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:42:24 np0005534516 podman[454700]: 2025-11-25 09:42:24.819207976 +0000 UTC m=+0.067063732 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:42:24 np0005534516 nova_compute[253538]: 2025-11-25 09:42:24.858 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:24 np0005534516 nova_compute[253538]: 2025-11-25 09:42:24.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Nov 25 04:42:25 np0005534516 nova_compute[253538]: 2025-11-25 09:42:25.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:42:26 np0005534516 nova_compute[253538]: 2025-11-25 09:42:26.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 04:42:27 np0005534516 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:27 np0005534516 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:27 np0005534516 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:42:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:42:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:42:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:42:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:42:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 04:42:29 np0005534516 nova_compute[253538]: 2025-11-25 09:42:29.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 04:42:31 np0005534516 nova_compute[253538]: 2025-11-25 09:42:31.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:32 np0005534516 podman[454737]: 2025-11-25 09:42:32.873628186 +0000 UTC m=+0.119590846 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 04:42:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 04:42:34 np0005534516 nova_compute[253538]: 2025-11-25 09:42:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:34 np0005534516 nova_compute[253538]: 2025-11-25 09:42:34.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:36 np0005534516 nova_compute[253538]: 2025-11-25 09:42:36.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:39 np0005534516 nova_compute[253538]: 2025-11-25 09:42:39.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.131 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:42:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:42:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:41 np0005534516 nova_compute[253538]: 2025-11-25 09:42:41.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:42:42 np0005534516 nova_compute[253538]: 2025-11-25 09:42:42.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:42:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:42:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938740477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.044 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.200 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:42:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.291 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:42:43 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:42:43 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627811981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.732 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.741 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.756 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.758 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:42:43 np0005534516 nova_compute[253538]: 2025-11-25 09:42:43.759 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:42:44 np0005534516 nova_compute[253538]: 2025-11-25 09:42:44.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:46 np0005534516 nova_compute[253538]: 2025-11-25 09:42:46.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:49 np0005534516 nova_compute[253538]: 2025-11-25 09:42:49.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:51 np0005534516 nova_compute[253538]: 2025-11-25 09:42:51.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:42:53
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'vms']
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:42:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:42:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:42:54 np0005534516 nova_compute[253538]: 2025-11-25 09:42:54.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:55 np0005534516 podman[454812]: 2025-11-25 09:42:55.813325698 +0000 UTC m=+0.061177201 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 04:42:55 np0005534516 podman[454811]: 2025-11-25 09:42:55.847299956 +0000 UTC m=+0.091369846 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:42:56 np0005534516 nova_compute[253538]: 2025-11-25 09:42:56.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:42:59 np0005534516 nova_compute[253538]: 2025-11-25 09:42:59.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:42:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:01 np0005534516 nova_compute[253538]: 2025-11-25 09:43:01.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d83876ba-50e8-48d6-9c53-5fba51135dfd does not exist
Nov 25 04:43:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 72dc65eb-ae16-499b-b237-87503c75c175 does not exist
Nov 25 04:43:02 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d98d0182-be53-4148-8280-581a11314092 does not exist
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:43:02 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:43:03 np0005534516 podman[455083]: 2025-11-25 09:43:03.08053024 +0000 UTC m=+0.115307768 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 04:43:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.304149294 +0000 UTC m=+0.025558088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.400949636 +0000 UTC m=+0.122358410 container create b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:43:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:43:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:03 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:43:03 np0005534516 systemd[1]: Started libpod-conmon-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope.
Nov 25 04:43:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.488731733 +0000 UTC m=+0.210140527 container init b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.497191084 +0000 UTC m=+0.218599858 container start b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.500514944 +0000 UTC m=+0.221923748 container attach b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:43:03 np0005534516 clever_feistel[455168]: 167 167
Nov 25 04:43:03 np0005534516 systemd[1]: libpod-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope: Deactivated successfully.
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.504544864 +0000 UTC m=+0.225953638 container died b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:43:03 np0005534516 systemd[1]: var-lib-containers-storage-overlay-9cd7d9c256f8572e957987a781e10b867c3a568cb991a7896718757efc26bc75-merged.mount: Deactivated successfully.
Nov 25 04:43:03 np0005534516 podman[455151]: 2025-11-25 09:43:03.552280627 +0000 UTC m=+0.273689401 container remove b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 04:43:03 np0005534516 systemd[1]: libpod-conmon-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope: Deactivated successfully.
Nov 25 04:43:03 np0005534516 podman[455191]: 2025-11-25 09:43:03.715414361 +0000 UTC m=+0.041133914 container create 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:43:03 np0005534516 systemd[1]: Started libpod-conmon-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope.
Nov 25 04:43:03 np0005534516 podman[455191]: 2025-11-25 09:43:03.69632733 +0000 UTC m=+0.022046903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:03 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:03 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:03 np0005534516 podman[455191]: 2025-11-25 09:43:03.825994579 +0000 UTC m=+0.151714132 container init 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:43:03 np0005534516 podman[455191]: 2025-11-25 09:43:03.83522124 +0000 UTC m=+0.160940783 container start 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 04:43:03 np0005534516 podman[455191]: 2025-11-25 09:43:03.838818409 +0000 UTC m=+0.164537962 container attach 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:43:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:43:04 np0005534516 nervous_blackwell[455208]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:43:04 np0005534516 nervous_blackwell[455208]: --> relative data size: 1.0
Nov 25 04:43:04 np0005534516 nervous_blackwell[455208]: --> All data devices are unavailable
Nov 25 04:43:04 np0005534516 nova_compute[253538]: 2025-11-25 09:43:04.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:04 np0005534516 systemd[1]: libpod-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Deactivated successfully.
Nov 25 04:43:04 np0005534516 podman[455191]: 2025-11-25 09:43:04.938451725 +0000 UTC m=+1.264171308 container died 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:43:04 np0005534516 systemd[1]: libpod-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Consumed 1.041s CPU time.
Nov 25 04:43:04 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a-merged.mount: Deactivated successfully.
Nov 25 04:43:04 np0005534516 podman[455191]: 2025-11-25 09:43:04.996277654 +0000 UTC m=+1.321997207 container remove 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:43:05 np0005534516 systemd[1]: libpod-conmon-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Deactivated successfully.
Nov 25 04:43:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.662821559 +0000 UTC m=+0.046098900 container create 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:43:05 np0005534516 systemd[1]: Started libpod-conmon-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope.
Nov 25 04:43:05 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.645487696 +0000 UTC m=+0.028765057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.751347264 +0000 UTC m=+0.134624625 container init 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.759860107 +0000 UTC m=+0.143137438 container start 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.765259824 +0000 UTC m=+0.148537185 container attach 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:43:05 np0005534516 elastic_ardinghelli[455408]: 167 167
Nov 25 04:43:05 np0005534516 systemd[1]: libpod-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope: Deactivated successfully.
Nov 25 04:43:05 np0005534516 conmon[455408]: conmon 505d5c0ad26598d65c36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope/container/memory.events
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.767428504 +0000 UTC m=+0.150705855 container died 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:43:05 np0005534516 systemd[1]: var-lib-containers-storage-overlay-2a951b02dbd4df22ade2c4cf1ae62c33649ca2494763ebf86863c08c9ef815cd-merged.mount: Deactivated successfully.
Nov 25 04:43:05 np0005534516 podman[455392]: 2025-11-25 09:43:05.805396051 +0000 UTC m=+0.188673392 container remove 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:43:05 np0005534516 systemd[1]: libpod-conmon-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope: Deactivated successfully.
Nov 25 04:43:05 np0005534516 podman[455430]: 2025-11-25 09:43:05.97720669 +0000 UTC m=+0.047865447 container create 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 04:43:06 np0005534516 systemd[1]: Started libpod-conmon-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope.
Nov 25 04:43:06 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:05.960238507 +0000 UTC m=+0.030897284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:06 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:06.068132412 +0000 UTC m=+0.138791179 container init 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:06.076376987 +0000 UTC m=+0.147035754 container start 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:06.081706822 +0000 UTC m=+0.152365599 container attach 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:43:06 np0005534516 nova_compute[253538]: 2025-11-25 09:43:06.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:06 np0005534516 lucid_booth[455447]: {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    "0": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "devices": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "/dev/loop3"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            ],
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_name": "ceph_lv0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_size": "21470642176",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "name": "ceph_lv0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "tags": {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_name": "ceph",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.crush_device_class": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.encrypted": "0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_id": "0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.vdo": "0"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            },
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "vg_name": "ceph_vg0"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        }
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    ],
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    "1": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "devices": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "/dev/loop4"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            ],
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_name": "ceph_lv1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_size": "21470642176",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "name": "ceph_lv1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "tags": {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_name": "ceph",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.crush_device_class": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.encrypted": "0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_id": "1",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.vdo": "0"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            },
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "vg_name": "ceph_vg1"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        }
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    ],
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    "2": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "devices": [
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "/dev/loop5"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            ],
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_name": "ceph_lv2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_size": "21470642176",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "name": "ceph_lv2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "tags": {
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.cluster_name": "ceph",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.crush_device_class": "",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.encrypted": "0",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osd_id": "2",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:                "ceph.vdo": "0"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            },
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "type": "block",
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:            "vg_name": "ceph_vg2"
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:        }
Nov 25 04:43:06 np0005534516 lucid_booth[455447]:    ]
Nov 25 04:43:06 np0005534516 lucid_booth[455447]: }
Nov 25 04:43:06 np0005534516 systemd[1]: libpod-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope: Deactivated successfully.
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:06.866408832 +0000 UTC m=+0.937067599 container died 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:43:06 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908-merged.mount: Deactivated successfully.
Nov 25 04:43:06 np0005534516 podman[455430]: 2025-11-25 09:43:06.92601125 +0000 UTC m=+0.996669997 container remove 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:43:06 np0005534516 systemd[1]: libpod-conmon-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope: Deactivated successfully.
Nov 25 04:43:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.623102148 +0000 UTC m=+0.044362992 container create 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:43:07 np0005534516 systemd[1]: Started libpod-conmon-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope.
Nov 25 04:43:07 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.605346963 +0000 UTC m=+0.026607817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.707901033 +0000 UTC m=+0.129161917 container init 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.717771212 +0000 UTC m=+0.139032056 container start 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.721857254 +0000 UTC m=+0.143118118 container attach 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:43:07 np0005534516 goofy_maxwell[455627]: 167 167
Nov 25 04:43:07 np0005534516 systemd[1]: libpod-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope: Deactivated successfully.
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.726674285 +0000 UTC m=+0.147935119 container died 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 04:43:07 np0005534516 systemd[1]: var-lib-containers-storage-overlay-aad4434f5cc4ea8766b4adf84f64d25d7bff2681795ad6851349d9ef34f0b7f0-merged.mount: Deactivated successfully.
Nov 25 04:43:07 np0005534516 podman[455611]: 2025-11-25 09:43:07.765371811 +0000 UTC m=+0.186632645 container remove 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 04:43:07 np0005534516 systemd[1]: libpod-conmon-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope: Deactivated successfully.
Nov 25 04:43:07 np0005534516 podman[455650]: 2025-11-25 09:43:07.975405634 +0000 UTC m=+0.072344066 container create 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:43:08 np0005534516 systemd[1]: Started libpod-conmon-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope.
Nov 25 04:43:08 np0005534516 podman[455650]: 2025-11-25 09:43:07.955394558 +0000 UTC m=+0.052333000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:43:08 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:43:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:08 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:43:08 np0005534516 podman[455650]: 2025-11-25 09:43:08.081661565 +0000 UTC m=+0.178600077 container init 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:43:08 np0005534516 podman[455650]: 2025-11-25 09:43:08.089650303 +0000 UTC m=+0.186588765 container start 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:43:08 np0005534516 podman[455650]: 2025-11-25 09:43:08.093659562 +0000 UTC m=+0.190598024 container attach 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]: {
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_id": 1,
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "type": "bluestore"
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    },
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_id": 2,
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "type": "bluestore"
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    },
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_id": 0,
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:        "type": "bluestore"
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]:    }
Nov 25 04:43:09 np0005534516 trusting_proskuriakova[455667]: }
Nov 25 04:43:09 np0005534516 systemd[1]: libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Deactivated successfully.
Nov 25 04:43:09 np0005534516 systemd[1]: libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Consumed 1.094s CPU time.
Nov 25 04:43:09 np0005534516 conmon[455667]: conmon 383dabd569bf9e1506bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope/container/memory.events
Nov 25 04:43:09 np0005534516 podman[455650]: 2025-11-25 09:43:09.17646981 +0000 UTC m=+1.273408232 container died 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 04:43:09 np0005534516 systemd[1]: var-lib-containers-storage-overlay-d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f-merged.mount: Deactivated successfully.
Nov 25 04:43:09 np0005534516 podman[455650]: 2025-11-25 09:43:09.237251929 +0000 UTC m=+1.334190351 container remove 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:43:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:09 np0005534516 systemd[1]: libpod-conmon-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Deactivated successfully.
Nov 25 04:43:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:43:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:43:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev c622ed19-5e96-4bc9-b606-d6f872d8ca45 does not exist
Nov 25 04:43:09 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev e4b18c4a-eb4e-477b-a0e3-0f10baec0f8d does not exist
Nov 25 04:43:09 np0005534516 nova_compute[253538]: 2025-11-25 09:43:09.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:43:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:11 np0005534516 nova_compute[253538]: 2025-11-25 09:43:11.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:14 np0005534516 nova_compute[253538]: 2025-11-25 09:43:14.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:16 np0005534516 nova_compute[253538]: 2025-11-25 09:43:16.760 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:16 np0005534516 nova_compute[253538]: 2025-11-25 09:43:16.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:18 np0005534516 nova_compute[253538]: 2025-11-25 09:43:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:19 np0005534516 nova_compute[253538]: 2025-11-25 09:43:19.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:21 np0005534516 nova_compute[253538]: 2025-11-25 09:43:21.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:24 np0005534516 nova_compute[253538]: 2025-11-25 09:43:24.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:26 np0005534516 nova_compute[253538]: 2025-11-25 09:43:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:26 np0005534516 podman[455775]: 2025-11-25 09:43:26.811652264 +0000 UTC m=+0.056787891 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:43:26 np0005534516 podman[455774]: 2025-11-25 09:43:26.820181227 +0000 UTC m=+0.064809310 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 04:43:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:27 np0005534516 nova_compute[253538]: 2025-11-25 09:43:27.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:28 np0005534516 nova_compute[253538]: 2025-11-25 09:43:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:28 np0005534516 nova_compute[253538]: 2025-11-25 09:43:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:28 np0005534516 nova_compute[253538]: 2025-11-25 09:43:28.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:43:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:43:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:43:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:29 np0005534516 nova_compute[253538]: 2025-11-25 09:43:29.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:31 np0005534516 nova_compute[253538]: 2025-11-25 09:43:31.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:33 np0005534516 podman[455812]: 2025-11-25 09:43:33.825031078 +0000 UTC m=+0.079792640 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:43:34 np0005534516 nova_compute[253538]: 2025-11-25 09:43:34.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:35 np0005534516 nova_compute[253538]: 2025-11-25 09:43:35.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:36 np0005534516 nova_compute[253538]: 2025-11-25 09:43:36.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:39 np0005534516 nova_compute[253538]: 2025-11-25 09:43:39.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:40 np0005534516 nova_compute[253538]: 2025-11-25 09:43:40.549 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:43:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.133 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:43:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:41 np0005534516 nova_compute[253538]: 2025-11-25 09:43:41.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.591 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:43:43 np0005534516 nova_compute[253538]: 2025-11-25 09:43:43.591 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:43:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:43:44 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365809163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.057 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.223 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.224 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3573MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.435 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.435 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.520 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.605 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.605 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.622 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.643 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.667 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:43:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:44 np0005534516 nova_compute[253538]: 2025-11-25 09:43:44.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:43:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370423732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:43:45 np0005534516 nova_compute[253538]: 2025-11-25 09:43:45.147 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:43:45 np0005534516 nova_compute[253538]: 2025-11-25 09:43:45.155 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:43:45 np0005534516 nova_compute[253538]: 2025-11-25 09:43:45.169 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:43:45 np0005534516 nova_compute[253538]: 2025-11-25 09:43:45.171 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:43:45 np0005534516 nova_compute[253538]: 2025-11-25 09:43:45.171 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:43:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:46 np0005534516 nova_compute[253538]: 2025-11-25 09:43:46.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:49 np0005534516 nova_compute[253538]: 2025-11-25 09:43:49.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:51 np0005534516 nova_compute[253538]: 2025-11-25 09:43:51.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:43:53
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images']
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:43:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:43:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:43:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:54 np0005534516 nova_compute[253538]: 2025-11-25 09:43:54.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:56 np0005534516 nova_compute[253538]: 2025-11-25 09:43:56.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:43:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:57 np0005534516 podman[455886]: 2025-11-25 09:43:57.795075953 +0000 UTC m=+0.044885906 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 04:43:57 np0005534516 podman[455885]: 2025-11-25 09:43:57.79790719 +0000 UTC m=+0.050230282 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:43:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:43:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:59 np0005534516 nova_compute[253538]: 2025-11-25 09:43:59.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:01 np0005534516 nova_compute[253538]: 2025-11-25 09:44:01.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:04 np0005534516 podman[455924]: 2025-11-25 09:44:04.834498047 +0000 UTC m=+0.085273388 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:44:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:44:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:04 np0005534516 nova_compute[253538]: 2025-11-25 09:44:04.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:06 np0005534516 nova_compute[253538]: 2025-11-25 09:44:06.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:44:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:44:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:09 np0005534516 nova_compute[253538]: 2025-11-25 09:44:09.955 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 3f9f7504-5acc-458c-aac7-ebc05ec90caf does not exist
Nov 25 04:44:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 1a23456b-ece7-4e18-b985-ffbb9a08629d does not exist
Nov 25 04:44:10 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 474f2439-6640-499b-b57e-824764c729f8 does not exist
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.740628) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850740823, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1557, "num_deletes": 251, "total_data_size": 2490969, "memory_usage": 2526048, "flush_reason": "Manual Compaction"}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850756596, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2444879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75917, "largest_seqno": 77473, "table_properties": {"data_size": 2437600, "index_size": 4285, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14780, "raw_average_key_size": 19, "raw_value_size": 2423178, "raw_average_value_size": 3261, "num_data_blocks": 192, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063686, "oldest_key_time": 1764063686, "file_creation_time": 1764063850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15865 microseconds, and 5630 cpu microseconds.
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.756641) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2444879 bytes OK
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.756663) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758507) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758522) EVENT_LOG_v1 {"time_micros": 1764063850758517, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2484211, prev total WAL file size 2484211, number of live WAL files 2.
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.759301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2387KB)], [182(10166KB)]
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850759343, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12855663, "oldest_snapshot_seqno": -1}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9371 keys, 11124444 bytes, temperature: kUnknown
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850835097, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 11124444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11065437, "index_size": 34446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 247267, "raw_average_key_size": 26, "raw_value_size": 10901687, "raw_average_value_size": 1163, "num_data_blocks": 1328, "num_entries": 9371, "num_filter_entries": 9371, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.835323) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 11124444 bytes
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.837248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 146.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(9.8) write-amplify(4.6) OK, records in: 9885, records dropped: 514 output_compression: NoCompression
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.837266) EVENT_LOG_v1 {"time_micros": 1764063850837258, "job": 114, "event": "compaction_finished", "compaction_time_micros": 75820, "compaction_time_cpu_micros": 32867, "output_level": 6, "num_output_files": 1, "total_output_size": 11124444, "num_input_records": 9885, "num_output_records": 9371, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850837747, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850839606, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.759215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:10 np0005534516 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:44:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.305621108 +0000 UTC m=+0.021502778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.458249175 +0000 UTC m=+0.174130865 container create 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 04:44:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:44:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:11 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:44:11 np0005534516 systemd[1]: Started libpod-conmon-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope.
Nov 25 04:44:11 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.656617249 +0000 UTC m=+0.372498919 container init 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.662896921 +0000 UTC m=+0.378778571 container start 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:44:11 np0005534516 pedantic_greider[456356]: 167 167
Nov 25 04:44:11 np0005534516 systemd[1]: libpod-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope: Deactivated successfully.
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.676972916 +0000 UTC m=+0.392854686 container attach 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.678200199 +0000 UTC m=+0.394081899 container died 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:44:11 np0005534516 systemd[1]: var-lib-containers-storage-overlay-b23d3d7beeeb73c0e00b40386473ac363e28f2152e2b87d32fed3af17b24324f-merged.mount: Deactivated successfully.
Nov 25 04:44:11 np0005534516 podman[456340]: 2025-11-25 09:44:11.789584109 +0000 UTC m=+0.505465759 container remove 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:44:11 np0005534516 systemd[1]: libpod-conmon-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope: Deactivated successfully.
Nov 25 04:44:11 np0005534516 nova_compute[253538]: 2025-11-25 09:44:11.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:11 np0005534516 podman[456382]: 2025-11-25 09:44:11.94932402 +0000 UTC m=+0.039951092 container create 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:11 np0005534516 systemd[1]: Started libpod-conmon-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope.
Nov 25 04:44:12 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:12 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:12 np0005534516 podman[456382]: 2025-11-25 09:44:11.933664122 +0000 UTC m=+0.024291144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:12 np0005534516 podman[456382]: 2025-11-25 09:44:12.041049784 +0000 UTC m=+0.131676806 container init 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:44:12 np0005534516 podman[456382]: 2025-11-25 09:44:12.047444848 +0000 UTC m=+0.138071850 container start 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 04:44:12 np0005534516 podman[456382]: 2025-11-25 09:44:12.050369988 +0000 UTC m=+0.140997020 container attach 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:13 np0005534516 nervous_newton[456398]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:44:13 np0005534516 nervous_newton[456398]: --> relative data size: 1.0
Nov 25 04:44:13 np0005534516 nervous_newton[456398]: --> All data devices are unavailable
Nov 25 04:44:13 np0005534516 systemd[1]: libpod-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope: Deactivated successfully.
Nov 25 04:44:13 np0005534516 podman[456382]: 2025-11-25 09:44:13.090940892 +0000 UTC m=+1.181567894 container died 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 04:44:13 np0005534516 systemd[1]: var-lib-containers-storage-overlay-836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce-merged.mount: Deactivated successfully.
Nov 25 04:44:13 np0005534516 podman[456382]: 2025-11-25 09:44:13.246465427 +0000 UTC m=+1.337092429 container remove 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 04:44:13 np0005534516 systemd[1]: libpod-conmon-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope: Deactivated successfully.
Nov 25 04:44:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:13 np0005534516 podman[456582]: 2025-11-25 09:44:13.910960556 +0000 UTC m=+0.070740182 container create d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:44:13 np0005534516 systemd[1]: Started libpod-conmon-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope.
Nov 25 04:44:13 np0005534516 podman[456582]: 2025-11-25 09:44:13.867362055 +0000 UTC m=+0.027142191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:13 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:14 np0005534516 podman[456582]: 2025-11-25 09:44:14.013465174 +0000 UTC m=+0.173244820 container init d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:44:14 np0005534516 podman[456582]: 2025-11-25 09:44:14.02211686 +0000 UTC m=+0.181896486 container start d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:44:14 np0005534516 upbeat_easley[456598]: 167 167
Nov 25 04:44:14 np0005534516 systemd[1]: libpod-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope: Deactivated successfully.
Nov 25 04:44:14 np0005534516 podman[456582]: 2025-11-25 09:44:14.042817045 +0000 UTC m=+0.202596691 container attach d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:44:14 np0005534516 podman[456582]: 2025-11-25 09:44:14.04374359 +0000 UTC m=+0.203523216 container died d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:44:14 np0005534516 systemd[1]: var-lib-containers-storage-overlay-6beea3c2aea0dc645e9a0b16f24d13ba7498866f820099847339b0a45be8ec41-merged.mount: Deactivated successfully.
Nov 25 04:44:14 np0005534516 podman[456582]: 2025-11-25 09:44:14.139148065 +0000 UTC m=+0.298927681 container remove d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:44:14 np0005534516 systemd[1]: libpod-conmon-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope: Deactivated successfully.
Nov 25 04:44:14 np0005534516 podman[456621]: 2025-11-25 09:44:14.350583856 +0000 UTC m=+0.069843207 container create 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 04:44:14 np0005534516 podman[456621]: 2025-11-25 09:44:14.305197138 +0000 UTC m=+0.024456509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:14 np0005534516 systemd[1]: Started libpod-conmon-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope.
Nov 25 04:44:14 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:14 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:14 np0005534516 podman[456621]: 2025-11-25 09:44:14.496657314 +0000 UTC m=+0.215916675 container init 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:44:14 np0005534516 podman[456621]: 2025-11-25 09:44:14.502476432 +0000 UTC m=+0.221735783 container start 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 04:44:14 np0005534516 podman[456621]: 2025-11-25 09:44:14.520283359 +0000 UTC m=+0.239542720 container attach 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:44:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:14 np0005534516 nova_compute[253538]: 2025-11-25 09:44:14.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]: {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    "0": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "devices": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "/dev/loop3"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            ],
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_name": "ceph_lv0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_size": "21470642176",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "name": "ceph_lv0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "tags": {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_name": "ceph",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.crush_device_class": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.encrypted": "0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_id": "0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.vdo": "0"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            },
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "vg_name": "ceph_vg0"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        }
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    ],
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    "1": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "devices": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "/dev/loop4"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            ],
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_name": "ceph_lv1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_size": "21470642176",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "name": "ceph_lv1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "tags": {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_name": "ceph",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.crush_device_class": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.encrypted": "0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_id": "1",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.vdo": "0"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            },
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "vg_name": "ceph_vg1"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        }
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    ],
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    "2": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "devices": [
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "/dev/loop5"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            ],
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_name": "ceph_lv2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_size": "21470642176",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "name": "ceph_lv2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "tags": {
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.cluster_name": "ceph",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.crush_device_class": "",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.encrypted": "0",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osd_id": "2",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:                "ceph.vdo": "0"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            },
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "type": "block",
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:            "vg_name": "ceph_vg2"
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:        }
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]:    ]
Nov 25 04:44:15 np0005534516 naughty_tharp[456637]: }
Nov 25 04:44:15 np0005534516 systemd[1]: libpod-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope: Deactivated successfully.
Nov 25 04:44:15 np0005534516 podman[456621]: 2025-11-25 09:44:15.329704643 +0000 UTC m=+1.048964004 container died 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:44:15 np0005534516 systemd[1]: var-lib-containers-storage-overlay-ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543-merged.mount: Deactivated successfully.
Nov 25 04:44:15 np0005534516 podman[456621]: 2025-11-25 09:44:15.388145638 +0000 UTC m=+1.107404989 container remove 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:15 np0005534516 systemd[1]: libpod-conmon-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope: Deactivated successfully.
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.030455922 +0000 UTC m=+0.036551189 container create eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:16 np0005534516 systemd[1]: Started libpod-conmon-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope.
Nov 25 04:44:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.016439699 +0000 UTC m=+0.022534996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.119445 +0000 UTC m=+0.125540297 container init eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.129349921 +0000 UTC m=+0.135445208 container start eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.132820645 +0000 UTC m=+0.138915932 container attach eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:16 np0005534516 gifted_cray[456813]: 167 167
Nov 25 04:44:16 np0005534516 systemd[1]: libpod-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope: Deactivated successfully.
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.13554155 +0000 UTC m=+0.141636847 container died eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:44:16 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0df3a0fd2e80188c1fb726c19486864adf56e227a61a4b63c4b632b5acfee39c-merged.mount: Deactivated successfully.
Nov 25 04:44:16 np0005534516 podman[456797]: 2025-11-25 09:44:16.16778167 +0000 UTC m=+0.173876947 container remove eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 04:44:16 np0005534516 systemd[1]: libpod-conmon-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope: Deactivated successfully.
Nov 25 04:44:16 np0005534516 podman[456837]: 2025-11-25 09:44:16.342669924 +0000 UTC m=+0.051901407 container create fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:44:16 np0005534516 systemd[1]: Started libpod-conmon-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope.
Nov 25 04:44:16 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:44:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:16 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:44:16 np0005534516 podman[456837]: 2025-11-25 09:44:16.325392262 +0000 UTC m=+0.034623775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:44:16 np0005534516 podman[456837]: 2025-11-25 09:44:16.423755147 +0000 UTC m=+0.132986630 container init fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:44:16 np0005534516 podman[456837]: 2025-11-25 09:44:16.436757242 +0000 UTC m=+0.145988725 container start fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 04:44:16 np0005534516 podman[456837]: 2025-11-25 09:44:16.440472854 +0000 UTC m=+0.149704627 container attach fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:44:16 np0005534516 nova_compute[253538]: 2025-11-25 09:44:16.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]: {
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_id": 1,
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "type": "bluestore"
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    },
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_id": 2,
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "type": "bluestore"
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    },
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_id": 0,
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:        "type": "bluestore"
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]:    }
Nov 25 04:44:17 np0005534516 hopeful_austin[456854]: }
Nov 25 04:44:17 np0005534516 systemd[1]: libpod-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope: Deactivated successfully.
Nov 25 04:44:17 np0005534516 podman[456837]: 2025-11-25 09:44:17.423474326 +0000 UTC m=+1.132705809 container died fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 04:44:17 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24-merged.mount: Deactivated successfully.
Nov 25 04:44:17 np0005534516 podman[456837]: 2025-11-25 09:44:17.574439987 +0000 UTC m=+1.283671470 container remove fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:44:17 np0005534516 systemd[1]: libpod-conmon-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope: Deactivated successfully.
Nov 25 04:44:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:44:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:17 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:44:17 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:17 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 5dafb98f-c0a7-4bd1-ba17-e46082a7382a does not exist
Nov 25 04:44:17 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev a6228746-1f03-4c13-b81c-d93839102d4b does not exist
Nov 25 04:44:18 np0005534516 nova_compute[253538]: 2025-11-25 09:44:18.172 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:44:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:19 np0005534516 nova_compute[253538]: 2025-11-25 09:44:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:19 np0005534516 nova_compute[253538]: 2025-11-25 09:44:19.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:21 np0005534516 nova_compute[253538]: 2025-11-25 09:44:21.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:24 np0005534516 nova_compute[253538]: 2025-11-25 09:44:24.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:26 np0005534516 nova_compute[253538]: 2025-11-25 09:44:26.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:27 np0005534516 nova_compute[253538]: 2025-11-25 09:44:27.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:28 np0005534516 nova_compute[253538]: 2025-11-25 09:44:28.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:28 np0005534516 nova_compute[253538]: 2025-11-25 09:44:28.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:28 np0005534516 podman[456956]: 2025-11-25 09:44:28.809232231 +0000 UTC m=+0.061094859 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:44:28 np0005534516 podman[456957]: 2025-11-25 09:44:28.828146888 +0000 UTC m=+0.080468308 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:44:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:44:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:44:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:44:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:44:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:29 np0005534516 nova_compute[253538]: 2025-11-25 09:44:29.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:29 np0005534516 nova_compute[253538]: 2025-11-25 09:44:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:44:29 np0005534516 nova_compute[253538]: 2025-11-25 09:44:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:44:29 np0005534516 nova_compute[253538]: 2025-11-25 09:44:29.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:44:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:29 np0005534516 nova_compute[253538]: 2025-11-25 09:44:29.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:30 np0005534516 nova_compute[253538]: 2025-11-25 09:44:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:30 np0005534516 nova_compute[253538]: 2025-11-25 09:44:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:44:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:31 np0005534516 nova_compute[253538]: 2025-11-25 09:44:31.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:34 np0005534516 nova_compute[253538]: 2025-11-25 09:44:34.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:35 np0005534516 podman[456997]: 2025-11-25 09:44:35.877842101 +0000 UTC m=+0.121175918 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:44:36 np0005534516 nova_compute[253538]: 2025-11-25 09:44:36.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:37 np0005534516 nova_compute[253538]: 2025-11-25 09:44:37.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:39 np0005534516 nova_compute[253538]: 2025-11-25 09:44:39.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.133 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:44:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.136 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:44:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:41 np0005534516 nova_compute[253538]: 2025-11-25 09:44:41.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:44:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:44 np0005534516 nova_compute[253538]: 2025-11-25 09:44:44.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:44:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094437044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.072 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:44:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.296 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.297 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.353 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.354 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.374 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:44:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:44:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853883255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.821 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.830 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.845 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.848 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:44:45 np0005534516 nova_compute[253538]: 2025-11-25 09:44:45.849 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:44:46 np0005534516 nova_compute[253538]: 2025-11-25 09:44:46.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:50 np0005534516 nova_compute[253538]: 2025-11-25 09:44:50.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:51 np0005534516 nova_compute[253538]: 2025-11-25 09:44:51.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:44:53
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'vms']
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:44:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:44:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:44:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:55 np0005534516 nova_compute[253538]: 2025-11-25 09:44:55.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:56 np0005534516 nova_compute[253538]: 2025-11-25 09:44:56.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:44:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:44:59 np0005534516 podman[457074]: 2025-11-25 09:44:59.849519673 +0000 UTC m=+0.084788976 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 04:44:59 np0005534516 podman[457075]: 2025-11-25 09:44:59.881945358 +0000 UTC m=+0.112375679 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 04:44:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:00 np0005534516 nova_compute[253538]: 2025-11-25 09:45:00.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:01 np0005534516 nova_compute[253538]: 2025-11-25 09:45:01.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:45:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:45:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:05 np0005534516 nova_compute[253538]: 2025-11-25 09:45:05.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:06 np0005534516 podman[457110]: 2025-11-25 09:45:06.863765949 +0000 UTC m=+0.114898377 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 04:45:06 np0005534516 nova_compute[253538]: 2025-11-25 09:45:06.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:10 np0005534516 nova_compute[253538]: 2025-11-25 09:45:10.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:11 np0005534516 nova_compute[253538]: 2025-11-25 09:45:11.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:15 np0005534516 nova_compute[253538]: 2025-11-25 09:45:15.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:16 np0005534516 nova_compute[253538]: 2025-11-25 09:45:16.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:17 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:17 np0005534516 nova_compute[253538]: 2025-11-25 09:45:17.849 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev d4660492-c67a-439c-90e5-3add8392451b does not exist
Nov 25 04:45:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 89ea9a3f-6af0-429d-a5c7-bcd06efba30c does not exist
Nov 25 04:45:18 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 34a235ac-4d8b-4f64-830d-c705b92da52e does not exist
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:18 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.190622704 +0000 UTC m=+0.040440376 container create 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 04:45:19 np0005534516 systemd[1]: Started libpod-conmon-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope.
Nov 25 04:45:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.174901154 +0000 UTC m=+0.024718846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.277800733 +0000 UTC m=+0.127618435 container init 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.28901887 +0000 UTC m=+0.138836542 container start 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:45:19 np0005534516 admiring_poincare[457423]: 167 167
Nov 25 04:45:19 np0005534516 systemd[1]: libpod-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope: Deactivated successfully.
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.300167304 +0000 UTC m=+0.149985056 container attach 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.301091009 +0000 UTC m=+0.150908681 container died 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:45:19 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:19 np0005534516 systemd[1]: var-lib-containers-storage-overlay-1a31b984f039012911ba5054c747098d5934321e5995f3f224d29d4035b480fe-merged.mount: Deactivated successfully.
Nov 25 04:45:19 np0005534516 podman[457407]: 2025-11-25 09:45:19.360706416 +0000 UTC m=+0.210524088 container remove 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:45:19 np0005534516 systemd[1]: libpod-conmon-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope: Deactivated successfully.
Nov 25 04:45:19 np0005534516 nova_compute[253538]: 2025-11-25 09:45:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:19 np0005534516 podman[457446]: 2025-11-25 09:45:19.559208684 +0000 UTC m=+0.084596709 container create 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 04:45:19 np0005534516 podman[457446]: 2025-11-25 09:45:19.501818388 +0000 UTC m=+0.027206503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:19 np0005534516 systemd[1]: Started libpod-conmon-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope.
Nov 25 04:45:19 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:19 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:19 np0005534516 podman[457446]: 2025-11-25 09:45:19.645520761 +0000 UTC m=+0.170908806 container init 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:19 np0005534516 podman[457446]: 2025-11-25 09:45:19.656646234 +0000 UTC m=+0.182034269 container start 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 04:45:19 np0005534516 podman[457446]: 2025-11-25 09:45:19.659938064 +0000 UTC m=+0.185326119 container attach 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 04:45:19 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:20 np0005534516 nova_compute[253538]: 2025-11-25 09:45:20.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:20 np0005534516 jovial_ellis[457463]: --> passed data devices: 0 physical, 3 LVM
Nov 25 04:45:20 np0005534516 jovial_ellis[457463]: --> relative data size: 1.0
Nov 25 04:45:20 np0005534516 jovial_ellis[457463]: --> All data devices are unavailable
Nov 25 04:45:20 np0005534516 systemd[1]: libpod-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Deactivated successfully.
Nov 25 04:45:20 np0005534516 systemd[1]: libpod-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Consumed 1.052s CPU time.
Nov 25 04:45:20 np0005534516 podman[457446]: 2025-11-25 09:45:20.767838567 +0000 UTC m=+1.293226592 container died 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:20 np0005534516 systemd[1]: var-lib-containers-storage-overlay-92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9-merged.mount: Deactivated successfully.
Nov 25 04:45:20 np0005534516 podman[457446]: 2025-11-25 09:45:20.830413535 +0000 UTC m=+1.355801580 container remove 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:45:20 np0005534516 systemd[1]: libpod-conmon-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Deactivated successfully.
Nov 25 04:45:21 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.538873483 +0000 UTC m=+0.048344140 container create 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:45:21 np0005534516 systemd[1]: Started libpod-conmon-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope.
Nov 25 04:45:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.519551076 +0000 UTC m=+0.029021783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.616029959 +0000 UTC m=+0.125500626 container init 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.624286464 +0000 UTC m=+0.133757111 container start 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.627279406 +0000 UTC m=+0.136750053 container attach 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 04:45:21 np0005534516 charming_nobel[457662]: 167 167
Nov 25 04:45:21 np0005534516 systemd[1]: libpod-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope: Deactivated successfully.
Nov 25 04:45:21 np0005534516 conmon[457662]: conmon 26455697d3e7aac83a01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope/container/memory.events
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.632740335 +0000 UTC m=+0.142210992 container died 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:21 np0005534516 systemd[1]: var-lib-containers-storage-overlay-fc4771e21aee5bb6fccfb7d751250d9c98513b93d0e559becbfc7579dbd650df-merged.mount: Deactivated successfully.
Nov 25 04:45:21 np0005534516 podman[457646]: 2025-11-25 09:45:21.676387987 +0000 UTC m=+0.185858634 container remove 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:45:21 np0005534516 systemd[1]: libpod-conmon-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope: Deactivated successfully.
Nov 25 04:45:21 np0005534516 podman[457687]: 2025-11-25 09:45:21.901384848 +0000 UTC m=+0.062768294 container create 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:45:21 np0005534516 systemd[1]: Started libpod-conmon-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope.
Nov 25 04:45:21 np0005534516 podman[457687]: 2025-11-25 09:45:21.875034549 +0000 UTC m=+0.036418085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:21 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:21 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:21 np0005534516 podman[457687]: 2025-11-25 09:45:21.992883866 +0000 UTC m=+0.154267342 container init 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:45:22 np0005534516 nova_compute[253538]: 2025-11-25 09:45:21.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:22 np0005534516 podman[457687]: 2025-11-25 09:45:22.000047412 +0000 UTC m=+0.161430868 container start 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 04:45:22 np0005534516 podman[457687]: 2025-11-25 09:45:22.003420504 +0000 UTC m=+0.164803970 container attach 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]: {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    "0": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "devices": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "/dev/loop3"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            ],
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_name": "ceph_lv0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_size": "21470642176",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "name": "ceph_lv0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "tags": {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_name": "ceph",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.crush_device_class": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.encrypted": "0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_id": "0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.vdo": "0"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            },
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "vg_name": "ceph_vg0"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        }
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    ],
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    "1": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "devices": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "/dev/loop4"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            ],
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_name": "ceph_lv1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_size": "21470642176",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "name": "ceph_lv1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "tags": {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_name": "ceph",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.crush_device_class": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.encrypted": "0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_id": "1",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.vdo": "0"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            },
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "vg_name": "ceph_vg1"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        }
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    ],
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    "2": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "devices": [
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "/dev/loop5"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            ],
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_name": "ceph_lv2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_size": "21470642176",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "name": "ceph_lv2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "tags": {
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.cluster_name": "ceph",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.crush_device_class": "",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.encrypted": "0",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osd_id": "2",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:                "ceph.vdo": "0"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            },
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "type": "block",
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:            "vg_name": "ceph_vg2"
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:        }
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]:    ]
Nov 25 04:45:22 np0005534516 infallible_haslett[457704]: }
Nov 25 04:45:22 np0005534516 systemd[1]: libpod-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope: Deactivated successfully.
Nov 25 04:45:23 np0005534516 podman[457713]: 2025-11-25 09:45:23.034006096 +0000 UTC m=+0.038306297 container died 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:45:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae-merged.mount: Deactivated successfully.
Nov 25 04:45:23 np0005534516 podman[457713]: 2025-11-25 09:45:23.081036669 +0000 UTC m=+0.085336840 container remove 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 04:45:23 np0005534516 systemd[1]: libpod-conmon-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope: Deactivated successfully.
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:23 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.813714649 +0000 UTC m=+0.050479909 container create c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 04:45:23 np0005534516 systemd[1]: Started libpod-conmon-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope.
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.78590513 +0000 UTC m=+0.022670410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:23 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.90239671 +0000 UTC m=+0.139161990 container init c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.908327282 +0000 UTC m=+0.145092542 container start c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:45:23 np0005534516 elegant_wescoff[457886]: 167 167
Nov 25 04:45:23 np0005534516 systemd[1]: libpod-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope: Deactivated successfully.
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.914787778 +0000 UTC m=+0.151553068 container attach c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.915112327 +0000 UTC m=+0.151877587 container died c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 04:45:23 np0005534516 systemd[1]: var-lib-containers-storage-overlay-0087a099184feb138d26866296e2990f10dd4bd69941c6eabbe343c524e92269-merged.mount: Deactivated successfully.
Nov 25 04:45:23 np0005534516 podman[457869]: 2025-11-25 09:45:23.964427433 +0000 UTC m=+0.201192703 container remove c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:45:23 np0005534516 systemd[1]: libpod-conmon-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope: Deactivated successfully.
Nov 25 04:45:24 np0005534516 podman[457909]: 2025-11-25 09:45:24.135850713 +0000 UTC m=+0.052136025 container create 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:45:24 np0005534516 systemd[1]: Started libpod-conmon-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope.
Nov 25 04:45:24 np0005534516 systemd[1]: Started libcrun container.
Nov 25 04:45:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:24 np0005534516 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:45:24 np0005534516 podman[457909]: 2025-11-25 09:45:24.111347333 +0000 UTC m=+0.027632625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 04:45:24 np0005534516 podman[457909]: 2025-11-25 09:45:24.220912574 +0000 UTC m=+0.137197866 container init 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:45:24 np0005534516 podman[457909]: 2025-11-25 09:45:24.231794491 +0000 UTC m=+0.148079763 container start 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:45:24 np0005534516 podman[457909]: 2025-11-25 09:45:24.236844099 +0000 UTC m=+0.153129391 container attach 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:45:24 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:25 np0005534516 nova_compute[253538]: 2025-11-25 09:45:25.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]: {
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_id": 1,
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "type": "bluestore"
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    },
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_id": 2,
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "type": "bluestore"
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    },
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_id": 0,
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:        "type": "bluestore"
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]:    }
Nov 25 04:45:25 np0005534516 intelligent_bouman[457926]: }
Nov 25 04:45:25 np0005534516 systemd[1]: libpod-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Deactivated successfully.
Nov 25 04:45:25 np0005534516 systemd[1]: libpod-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Consumed 1.067s CPU time.
Nov 25 04:45:25 np0005534516 podman[457909]: 2025-11-25 09:45:25.295385754 +0000 UTC m=+1.211671066 container died 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 04:45:25 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:25 np0005534516 systemd[1]: var-lib-containers-storage-overlay-f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca-merged.mount: Deactivated successfully.
Nov 25 04:45:25 np0005534516 podman[457909]: 2025-11-25 09:45:25.34945935 +0000 UTC m=+1.265744622 container remove 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:45:25 np0005534516 systemd[1]: libpod-conmon-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Deactivated successfully.
Nov 25 04:45:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 04:45:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:25 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 04:45:25 np0005534516 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev 96efbcec-9f84-4fdd-8cb2-0f6b5fb3b4a4 does not exist
Nov 25 04:45:25 np0005534516 ceph-mgr[75313]: [progress WARNING root] complete: ev bf8fa640-345f-49a7-b244-1a57cfe92224 does not exist
Nov 25 04:45:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:26 np0005534516 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 04:45:27 np0005534516 nova_compute[253538]: 2025-11-25 09:45:27.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:27 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:28 np0005534516 nova_compute[253538]: 2025-11-25 09:45:28.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:28 np0005534516 nova_compute[253538]: 2025-11-25 09:45:28.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 04:45:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:45:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 04:45:29 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:45:29 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:29 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:30 np0005534516 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:45:30 np0005534516 podman[458023]: 2025-11-25 09:45:30.809350418 +0000 UTC m=+0.061291355 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:30 np0005534516 podman[458022]: 2025-11-25 09:45:30.815834914 +0000 UTC m=+0.067795891 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:45:31 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:32 np0005534516 nova_compute[253538]: 2025-11-25 09:45:32.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:33 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:34 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:35 np0005534516 nova_compute[253538]: 2025-11-25 09:45:35.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:35 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:37 np0005534516 nova_compute[253538]: 2025-11-25 09:45:37.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:37 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:37 np0005534516 podman[458062]: 2025-11-25 09:45:37.83371774 +0000 UTC m=+0.087792288 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:45:38 np0005534516 nova_compute[253538]: 2025-11-25 09:45:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:39 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:39 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:40 np0005534516 nova_compute[253538]: 2025-11-25 09:45:40.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.134 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:45:41 np0005534516 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:45:41 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:42 np0005534516 nova_compute[253538]: 2025-11-25 09:45:42.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:43 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:45:44 np0005534516 nova_compute[253538]: 2025-11-25 09:45:44.586 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:45:44 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:45:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716900065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.016 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.202 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.273 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.274 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.292 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:45:45 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:45 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 04:45:45 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787085369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.732 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.737 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.753 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.755 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:45:45 np0005534516 nova_compute[253538]: 2025-11-25 09:45:45.755 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:45:47 np0005534516 nova_compute[253538]: 2025-11-25 09:45:47.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:45:47 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:49 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:49 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:50 np0005534516 nova_compute[253538]: 2025-11-25 09:45:50.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:45:51 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:52 np0005534516 nova_compute[253538]: 2025-11-25 09:45:52.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:45:53
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 04:45:53 np0005534516 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:45:54 np0005534516 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 04:45:54 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:55 np0005534516 nova_compute[253538]: 2025-11-25 09:45:55.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:45:55 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:57 np0005534516 nova_compute[253538]: 2025-11-25 09:45:57.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:45:57 np0005534516 systemd-logind[822]: New session 58 of user zuul.
Nov 25 04:45:57 np0005534516 systemd[1]: Started Session 58 of User zuul.
Nov 25 04:45:57 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:59 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:45:59 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:45:59 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:00 np0005534516 nova_compute[253538]: 2025-11-25 09:46:00.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:00 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23439 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:00 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 04:46:00 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459262280' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 04:46:01 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:01 np0005534516 podman[458392]: 2025-11-25 09:46:01.514202593 +0000 UTC m=+0.057392298 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:46:01 np0005534516 podman[458391]: 2025-11-25 09:46:01.544533271 +0000 UTC m=+0.082651498 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 04:46:02 np0005534516 nova_compute[253538]: 2025-11-25 09:46:02.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:03 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:03 np0005534516 ovs-vsctl[458462]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 04:46:04 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 04:46:04 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 04:46:04 np0005534516 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 04:46:04 np0005534516 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 04:46:04 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:05 np0005534516 nova_compute[253538]: 2025-11-25 09:46:05.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:05 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:05 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: cache status {prefix=cache status} (starting...)
Nov 25 04:46:05 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: client ls {prefix=client ls} (starting...)
Nov 25 04:46:05 np0005534516 lvm[458805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:46:05 np0005534516 lvm[458805]: VG ceph_vg0 finished
Nov 25 04:46:05 np0005534516 lvm[458825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 04:46:05 np0005534516 lvm[458825]: VG ceph_vg1 finished
Nov 25 04:46:05 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23443 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:05 np0005534516 lvm[458835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 04:46:05 np0005534516 lvm[458835]: VG ceph_vg2 finished
Nov 25 04:46:06 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23445 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 04:46:06 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 04:46:06 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946772714' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 04:46:06 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 04:46:07 np0005534516 nova_compute[253538]: 2025-11-25 09:46:07.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:07 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23451 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:07 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:46:07 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:07.035+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:46:07 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967422664' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 04:46:07 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 04:46:07 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847336372' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 04:46:07 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: ops {prefix=ops} (starting...)
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244514426' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 04:46:07 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559998018' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230869845' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 04:46:08 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: session ls {prefix=session ls} (starting...)
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531798828' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 04:46:08 np0005534516 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: status {prefix=status} (starting...)
Nov 25 04:46:08 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23465 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 04:46:08 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73070914' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 04:46:08 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23469 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:08 np0005534516 podman[459249]: 2025-11-25 09:46:08.845121884 +0000 UTC m=+0.095924470 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3171255462' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116851053' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 04:46:09 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178038085' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961891983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118572444' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 04:46:09 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:10 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23481 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:10 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 04:46:10 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:10.096+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 04:46:10 np0005534516 nova_compute[253538]: 2025-11-25 09:46:10.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:10 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23483 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 04:46:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649500046' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 04:46:10 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23487 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:10 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23491 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:10 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 04:46:10 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776042938' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 04:46:11 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:11 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 04:46:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144628325' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 04:46:11 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9a1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a74a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1276214a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128516d20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.088615417s of 44.159133911s, submitted: 19
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844c000
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12808c780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1299c8000
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129ce90e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906377 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1280734a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909117 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.063507080s of 19.188037872s, submitted: 5
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286867456 unmapped: 50593792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 50577408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004787 data_alloc: 218103808 data_used: 10338304
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005123 data_alloc: 218103808 data_used: 10346496
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e1680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855d0e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129f4ab40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b692800 session 0x55a1294f3860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.005581856s of 12.598716736s, submitted: 51
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 45473792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962b2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12844d2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4a780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a1294a7c20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1299f6c00 session 0x55a12761c960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb2f00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129e9d860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb25a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129d8e780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059899 data_alloc: 218103808 data_used: 10350592
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.668926239s of 21.840452194s, submitted: 15
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292233216 unmapped: 45228032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebda6000/0x0/0x4ffc00000, data 0x2bca6ec/0x2d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292864000 unmapped: 44597248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194841 data_alloc: 234881024 data_used: 18636800
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196601 data_alloc: 234881024 data_used: 18771968
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698c00 session 0x55a128547a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12761c1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.503499985s of 12.766171455s, submitted: 100
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516b40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013621 data_alloc: 218103808 data_used: 10407936
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eccb6000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a70e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12850d860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.827690125s of 42.880455017s, submitted: 18
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290463744 unmapped: 46997504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128094f00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a128073860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a127959e00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5de00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12ad874a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939984 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962a1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289472512 unmapped: 47988736 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.737442017s of 17.874473572s, submitted: 19
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291201024 unmapped: 46260224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013660 data_alloc: 218103808 data_used: 11976704
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015580 data_alloc: 218103808 data_used: 12120064
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015900 data_alloc: 218103808 data_used: 12128256
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.645758629s of 14.787140846s, submitted: 29
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12844c960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad87680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad87a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935dc00 session 0x55a12ad865a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad86f00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12962a5a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291233792 unmapped: 46227456 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291299328 unmapped: 46161920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.151699066s of 20.260654449s, submitted: 29
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293240832 unmapped: 44220416 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131253 data_alloc: 218103808 data_used: 16269312
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293249024 unmapped: 44212224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec31b000/0x0/0x4ffc00000, data 0x264e72e/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3142553 data_alloc: 218103808 data_used: 16080896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129ac61e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129ce85a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb2f000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022497 data_alloc: 218103808 data_used: 12189696
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365324974s of 12.755796432s, submitted: 111
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129d7fa40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127620780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.636383057s of 33.703060150s, submitted: 22
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298565632 unmapped: 38895616 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128517c20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9b860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5cb40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1280952c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12761c780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed2ab000/0x0/0x4ffc00000, data 0x16cd6cc/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962b4a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956231 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128517680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12761c3c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 43474944 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12962b680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128665860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.936422348s of 14.315987587s, submitted: 15
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993848 data_alloc: 218103808 data_used: 12095488
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad865a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5cf00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad87a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129eb21e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12ad87680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.175201416s of 48.326942444s, submitted: 23
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d8e780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1294f3860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294e1680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a12761cf00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999881 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12d8ce800 session 0x55a128094f00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000013 data_alloc: 218103808 data_used: 7364608
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.436155319s of 18.604982376s, submitted: 24
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 43556864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec693000/0x0/0x4ffc00000, data 0x22e56cc/0x245b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365725517s of 12.565147400s, submitted: 42
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5de00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855c3c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d7e000
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131957 data_alloc: 234881024 data_used: 17362944
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 43401216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a13007dc00 session 0x55a129f53860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a12808d2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a1286a5000 session 0x55a128579a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a129f4a5a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299204608 unmapped: 41099264 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a127698400 session 0x55a1299c9860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 40034304 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 heartbeat osd_stat(store_statfs(0x4ebc4a000/0x0/0x4ffc00000, data 0x2d29e2a/0x2ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a12935d400 session 0x55a1294f2b40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127698400 session 0x55a12ad86000
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127eb5400 session 0x55a12761d2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3255591 data_alloc: 234881024 data_used: 24862720
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 40017920 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.536736488s of 10.118284225s, submitted: 64
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a127620960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eccee000/0x0/0x4ffc00000, data 0x1c83426/0x1dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a12844c960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb4000 session 0x55a129e9af00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a13007dc00 session 0x55a1285474a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129ce81e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a129f53c20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6f00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a1294a7860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 316350464 unmapped: 36495360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5ae000/0x0/0x4ffc00000, data 0x33c3488/0x3540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,4,1,3])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129f4b680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a128001c20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5aa000/0x0/0x4ffc00000, data 0x33c7488/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263674 data_alloc: 234881024 data_used: 18100224
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.454534531s of 13.402193069s, submitted: 53
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 270 ms_handle_reset con 0x55a13007dc00 session 0x55a1299c9c20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194453 data_alloc: 218103808 data_used: 15204352
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197427 data_alloc: 218103808 data_used: 15204352
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.603380203s of 11.698580742s, submitted: 38
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd8000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298754048 unmapped: 54091776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216491 data_alloc: 234881024 data_used: 17309696
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.006009102s of 14.082138062s, submitted: 3
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.220427513s of 10.246352196s, submitted: 3
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a12ad87860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb3860
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a12ad865a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972369 data_alloc: 218103808 data_used: 7401472
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.520774841s of 24.589715958s, submitted: 20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129f4a1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a129d8f2c0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ce85a0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5d680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a12850cf00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12ad86960
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129d5cd20
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048048 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a1285ba780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007e000 session 0x55a129d7f0e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 58523648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129eb30e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12844cb40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6b40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5402.4 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s#012Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a129f3ac00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.009826660s of 77.842643738s, submitted: 41
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977528 data_alloc: 218103808 data_used: 7405568
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 272 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb2b40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 273 ms_handle_reset con 0x55a1286de800 session 0x55a129d7eb40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923180 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 273 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1294e1a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.864181519s of 36.010292053s, submitted: 49
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293396480 unmapped: 59449344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 58277888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 82.001701355s of 82.319427490s, submitted: 90
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1285ba780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a127eb5400 session 0x55a12850cf00
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 58228736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 04:46:11 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 275 ms_handle_reset con 0x55a1286a4000 session 0x55a129f4a1e0
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 275 heartbeat osd_stat(store_statfs(0x4edfb4000/0x0/0x4ffc00000, data 0x9ba687/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.517580032s of 12.192517281s, submitted: 57
Nov 25 04:46:11 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015543659' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925069 data_alloc: 218103808 data_used: 7413760
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 58097664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 276 heartbeat osd_stat(store_statfs(0x4ee7b2000/0x0/0x4ffc00000, data 0x1bc258/0x33b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 276 ms_handle_reset con 0x55a1286a5000 session 0x55a12850c780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857665 data_alloc: 218103808 data_used: 679936
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 58073088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2860639 data_alloc: 218103808 data_used: 679936
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.606686592s of 14.555405617s, submitted: 67
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 ms_handle_reset con 0x55a129f44800 session 0x55a129d8f680
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 57876480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294985728 unmapped: 57860096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 104.417770386s of 104.486747742s, submitted: 10
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 49455104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919522 data_alloc: 218103808 data_used: 688128
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 280 ms_handle_reset con 0x55a12769b400 session 0x55a1294f3a40
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.512447357s of 10.195021629s, submitted: 4
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edfa4000/0x0/0x4ffc00000, data 0x9c2e7a/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 57802752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926341 data_alloc: 218103808 data_used: 696320
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 49414144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecfa4000/0x0/0x4ffc00000, data 0x19c2e9d/0x1b4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 ms_handle_reset con 0x55a127eb5400 session 0x55a1299c8000
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295075840 unmapped: 57769984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.138420105s of 43.667034149s, submitted: 14
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 57720832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 282 ms_handle_reset con 0x55a1286a4000 session 0x55a129f52780
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782243729s of 10.014166832s, submitted: 29
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 57597952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 57540608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 57524224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 57499648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 57425920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 57376768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 57344000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 57286656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 57253888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6002.4 total, 600.0 interval#012Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s#012Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 57032704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 57016320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 322.681640625s of 322.787384033s, submitted: 11
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 56893440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:11 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 56819712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 56803328 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 56754176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.430641174s of 98.900810242s, submitted: 90
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf9d000/0x0/0x4ffc00000, data 0x19c8008/0x1b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 284 ms_handle_reset con 0x55a1286a5000 session 0x55a129ce9860
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995067 data_alloc: 218103808 data_used: 729088
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 285 ms_handle_reset con 0x55a129f44800 session 0x55a129eb3860
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 285 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0x1cb787/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889761 data_alloc: 218103808 data_used: 737280
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809815407s of 11.059050560s, submitted: 61
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 286 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894255 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 287 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 287 ms_handle_reset con 0x55a12769b400 session 0x55a127620780
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 56451072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.735168457s of 63.853855133s, submitted: 16
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297492480 unmapped: 55353344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 289 ms_handle_reset con 0x55a127eb5400 session 0x55a128153860
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37c000/0x0/0x4ffc00000, data 0x1d23d3/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 55238656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 55214080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 nova_compute[253538]: 2025-11-25 09:46:12.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297713664 unmapped: 55132160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297787392 unmapped: 55058432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 ms_handle_reset con 0x55a127698400 session 0x55a129f4ba40
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297213952 unmapped: 55631872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 56614912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 56582144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 56492032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 56492032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 56410112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 56410112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 56377344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 56352768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 56328192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 56279040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 56262656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 56262656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 56254464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6602.4 total, 600.0 interval
Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 12K syncs, 2.82 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 333 writes, 863 keys, 333 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
Interval WAL: 333 writes, 138 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 56238080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 56238080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 56197120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 56147968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 56115200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 56098816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 56098816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 56082432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296796160 unmapped: 56049664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 55992320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 425.801208496s of 426.502380371s, submitted: 28
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 55959552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 55951360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297082880 unmapped: 55762944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 55738368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 55738368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 55697408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 55697408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 55582720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 55582720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 55500800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 55500800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 55492608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 55492608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 55459840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 55410688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:12 np0005534516 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 04:46:12 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 04:46:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062392188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 04:46:12 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23505 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 04:46:12 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 04:46:12 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260254065' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 04:46:12 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23509 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:46:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 04:46:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716183601' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 04:46:13 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23513 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:46:13 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:13 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 04:46:13 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351674670' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 04:46:13 np0005534516 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 04:46:13 np0005534516 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:46:13 np0005534516 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:13.948+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306814604' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572609602' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118742297' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876837186' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 04:46:14 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:15 np0005534516 nova_compute[253538]: 2025-11-25 09:46:15.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234045790' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092545494' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 04:46:15 np0005534516 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2452799282' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 04:46:15 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427329848' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615187276' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313208645' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.827438354s of 47.724197388s, submitted: 107
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 59654144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1163c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a50e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7034a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7c0d20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 59629568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.927917480s of 19.114524841s, submitted: 30
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 53174272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 53059584 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727151 data_alloc: 234881024 data_used: 29900800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3725011 data_alloc: 234881024 data_used: 29900800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.873039246s of 12.605758667s, submitted: 112
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 52609024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770760 data_alloc: 234881024 data_used: 29900800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e807a000/0x0/0x4ffc00000, data 0x3991018/0x3b04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,1,2])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2483c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cb40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e90cf00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e41f400 session 0x561d8e8c5a40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f048f00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b1c000/0x0/0x4ffc00000, data 0x3eef018/0x4062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769046 data_alloc: 234881024 data_used: 29900800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39ed20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d2365a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8c738780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e74c800 session 0x561d8d116b40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771924 data_alloc: 234881024 data_used: 29900800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365564346s of 10.544802666s, submitted: 22
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804024 data_alloc: 234881024 data_used: 34402304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804552 data_alloc: 234881024 data_used: 34402304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.267370224s of 11.282814980s, submitted: 5
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347512832 unmapped: 51535872 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 51519488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874330 data_alloc: 234881024 data_used: 34574336
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872410 data_alloc: 234881024 data_used: 34574336
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72f9000/0x0/0x4ffc00000, data 0x4711028/0x4885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 51478528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.121975899s of 12.436902046s, submitted: 54
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90de00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4321e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d1a41e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736725 data_alloc: 234881024 data_used: 29904896
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8074000/0x0/0x4ffc00000, data 0x3997018/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e806d000/0x0/0x4ffc00000, data 0x399e018/0x3b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d25bc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62bc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e249860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c8130e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e39fa40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e24ed20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.992057800s of 43.214187622s, submitted: 76
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,1,3,3,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8eb53c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032f00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c644960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f048780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e512780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517656 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d16f0e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4325a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e5105a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.722257614s of 17.883968353s, submitted: 34
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 52879360 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648258 data_alloc: 234881024 data_used: 26505216
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 52592640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2485a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8c74ba40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4cd4a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cc780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.496106148s of 14.768519402s, submitted: 82
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38ef00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e8c4d20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ed734a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39f860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e50eb40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d237860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d1a50e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c74b680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.003824234s of 17.156684875s, submitted: 17
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 46997504 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817961 data_alloc: 234881024 data_used: 35102720
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 46325760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3828065 data_alloc: 234881024 data_used: 35172352
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787124634s of 10.008896828s, submitted: 44
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 46292992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f38f680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684633 data_alloc: 234881024 data_used: 27738112
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e3ec1e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e18a3c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e39f860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed734a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8f38ef00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.750972748s of 38.958789825s, submitted: 67
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89bd000/0x0/0x4ffc00000, data 0x1ea8028/0x201c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 58351616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cc780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8f048780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8eb53c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d1a41e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d116b40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492197 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8112000/0x0/0x4ffc00000, data 0x234708a/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c738780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2485a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d2a63c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529564 data_alloc: 218103808 data_used: 19730432
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cd860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2492c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529432 data_alloc: 218103808 data_used: 19730432
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.582655907s of 14.895611763s, submitted: 46
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1e00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25a000
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529368 data_alloc: 218103808 data_used: 19738624
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e248780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f0483c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f047c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 60456960 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.281188965s of 48.498039246s, submitted: 50
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c880000
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525743 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f047680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e2490e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8f049e00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e1a1a40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338698240 unmapped: 60350464 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529969 data_alloc: 218103808 data_used: 14966784
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 60342272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.442842484s of 18.518671036s, submitted: 11
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 59375616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 54894592 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e712d000/0x0/0x4ffc00000, data 0x332d028/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3694817 data_alloc: 218103808 data_used: 24907776
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3691285 data_alloc: 218103808 data_used: 24907776
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7109000/0x0/0x4ffc00000, data 0x3351028/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927303314s of 10.246125221s, submitted: 93
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cdc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8cff1860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e70e8000/0x0/0x4ffc00000, data 0x3372028/0x34e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693961 data_alloc: 218103808 data_used: 24915968
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8d10d400 session 0x561d8e50e1e0
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140011526' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8e41e400 session 0x561d8d116780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f0e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 52314112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c6d7800 session 0x561d8f38eb40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 52297728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8d109000 session 0x561d8e3edc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 52273152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8e4cd4a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8f38fe00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8c459800 session 0x561d8ed73e00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970481 data_alloc: 234881024 data_used: 33873920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 heartbeat osd_stat(store_statfs(0x4e6421000/0x0/0x4ffc00000, data 0x5072371/0x51eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.572269440s of 12.077063560s, submitted: 93
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8ed72000
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8e3eda40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8e41e400 session 0x561d8e7c1c20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8c659e00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1a5860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8d16e780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e5105a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 384196608 unmapped: 25747456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02c400 session 0x561d8c74ba40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8d2363c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1174a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3907564 data_alloc: 251658240 data_used: 44687360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abd000/0x0/0x4ffc00000, data 0x49d7dc4/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8f38f4a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500659943s of 11.212936401s, submitted: 43
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e50e000
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abc000/0x0/0x4ffc00000, data 0x49d7dd3/0x4b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 41066496 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 ms_handle_reset con 0x561d8d0b7000 session 0x561d8f38ed20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668229 data_alloc: 218103808 data_used: 22822912
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671011 data_alloc: 218103808 data_used: 22822912
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.854998589s of 14.211294174s, submitted: 75
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3687955 data_alloc: 218103808 data_used: 24702976
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693395 data_alloc: 234881024 data_used: 25255936
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693571 data_alloc: 234881024 data_used: 25251840
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.356461525s of 12.525735855s, submitted: 8
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693235 data_alloc: 234881024 data_used: 25247744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3692883 data_alloc: 234881024 data_used: 25247744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c5cbc00 session 0x561d8f0485a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e3ed680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.056241989s of 12.152028084s, submitted: 5
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d0b7000 session 0x561d8e24eb40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 5400.4 total, 600.0 interval
                                              Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                              Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
                                              Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.209249496s of 24.272548676s, submitted: 19
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8c62bc20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551006 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e913b000/0x0/0x4ffc00000, data 0x2358396/0x24d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c6d7800 session 0x561d8c644960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 356655104 unmapped: 53288960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ba40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516663 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e18a780
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8e74f800 session 0x561d8e18be00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8c6aa800
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f032c00 session 0x561d8e1cc000
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f039c00 session 0x561d8d117860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.114036560s of 77.737030029s, submitted: 33
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 60071936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 272 ms_handle_reset con 0x561d8e52e800 session 0x561d8d16f680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 60063744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 273 ms_handle_reset con 0x561d8d10d000 session 0x561d8e4cd4a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478923 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 60039168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 60006400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d96070c00 session 0x561d8d117680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 59973632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.830196381s of 36.115074158s, submitted: 95
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 59957248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8c459800 session 0x561d8e50e1e0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 59490304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8d10bc00 session 0x561d8cff1860
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 59744256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.937507629s of 91.256561279s, submitted: 90
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 275 heartbeat osd_stat(store_statfs(0x4e9a66000/0x0/0x4ffc00000, data 0x1a2a0f4/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 275 ms_handle_reset con 0x561d8d10d000 session 0x561d8e702d20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342967 data_alloc: 218103808 data_used: 7806976
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 61595648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 276 heartbeat osd_stat(store_statfs(0x4eaa68000/0x0/0x4ffc00000, data 0xa2a0d1/0xba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 276 ms_handle_reset con 0x561d8e52e800 session 0x561d8c644f00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3272720 data_alloc: 218103808 data_used: 1056768
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 64815104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.671767235s of 10.978181839s, submitted: 60
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 ms_handle_reset con 0x561d8e532400 session 0x561d8e7034a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 64724992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.406425476s of 111.508773804s, submitted: 26
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3287953 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d2f/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288207 data_alloc: 218103808 data_used: 1064960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 280 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72d20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291989 data_alloc: 218103808 data_used: 1073152
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb256000/0x0/0x4ffc00000, data 0x2328b1/0x3b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.714424610s of 13.350893974s, submitted: 22
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 ms_handle_reset con 0x561d8d10bc00 session 0x561d8c881a40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345522176 unmapped: 64421888 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 64389120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 64372736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 64364544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.750205994s of 40.227725983s, submitted: 4
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 64356352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 ms_handle_reset con 0x561d8d10d000 session 0x561d8e8c54a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148046494s of 10.304548264s, submitted: 38
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 64315392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 64135168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 63922176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 67313664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.4 total, 600.0 interval
Cumulative writes: 46K writes, 181K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 691 writes, 1763 keys, 691 commit groups, 1.0 writes per commit group, ingest: 0.85 MB, 0.00 MB/s
Interval WAL: 691 writes, 309 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 68157440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 68124672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 68100096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 68001792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 67960832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 322.153381348s of 322.755462646s, submitted: 14
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 67846144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 67780608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 67715072 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 67706880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 67649536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.540496826s of 97.933944702s, submitted: 90
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 67608576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 70795264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 284 ms_handle_reset con 0x561d8e52e800 session 0x561d8e7c0960
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333441 data_alloc: 218103808 data_used: 1097728
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddc000/0x0/0x4ffc00000, data 0x6a95f0/0x830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 285 ms_handle_reset con 0x561d8e532400 session 0x561d8d25ba40
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305645 data_alloc: 218103808 data_used: 1097728
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 285 heartbeat osd_stat(store_statfs(0x4eb24b000/0x0/0x4ffc00000, data 0x23b19e/0x3c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.389628410s of 10.221417427s, submitted: 70
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4eb248000/0x0/0x4ffc00000, data 0x23cc40/0x3c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4ea248000/0x0/0x4ffc00000, data 0x123cc40/0x13c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452337 data_alloc: 218103808 data_used: 1097728
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 287 ms_handle_reset con 0x561d8c459800 session 0x561d8e3ed680
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339279872 unmapped: 70664192 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.414230347s of 63.027751923s, submitted: 47
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459809 data_alloc: 218103808 data_used: 1105920
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 70606848 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 70598656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 70590464 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325729 data_alloc: 218103808 data_used: 1114112
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241e0d/0x3cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e8c43c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324113 data_alloc: 218103808 data_used: 1114112
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.201864243s of 11.404572487s, submitted: 53
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8d109000 session 0x561d8e39fe00
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 70492160 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8e74f800 session 0x561d8d1163c0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8c2aa800 session 0x561d8d2a6d20
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 70475776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 70393856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 70320128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 04:46:16 np0005534516 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149545527' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'config show' '{prefix=config show}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338845696 unmapped: 71098368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338665472 unmapped: 71278592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d96070c00 session 0x561d8d2365a0
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338452480 unmapped: 82534400 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 82493440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338509824 unmapped: 82477056 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 82444288 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 82444288 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 82419712 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338681856 unmapped: 82305024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 82264064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338731008 unmapped: 82255872 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 82182144 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 82182144 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 82108416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 82108416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 04:46:16 np0005534516 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
